00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2257
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3516
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.085 using credential 00000000-0000-0000-0000-000000000002
00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.131 Fetching changes from the remote Git repository
00:00:00.133 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.190 Using shallow fetch with depth 1
00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.190 > git --version # timeout=10
00:00:00.237 > git --version # 'git version 2.39.2'
00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.238 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.250 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.261 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:10.261 > git config core.sparsecheckout # timeout=10
00:00:10.271 > git read-tree -mu HEAD # timeout=10
00:00:10.286 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:10.308 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:10.308 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:10.411 [Pipeline] Start of Pipeline
00:00:10.427 [Pipeline] library
00:00:10.429 Loading library shm_lib@master
00:00:10.429 Library shm_lib@master is cached. Copying from home.
00:00:10.448 [Pipeline] node
00:00:10.460 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:10.462 [Pipeline] {
00:00:10.471 [Pipeline] catchError
00:00:10.474 [Pipeline] {
00:00:10.487 [Pipeline] wrap
00:00:10.497 [Pipeline] {
00:00:10.506 [Pipeline] stage
00:00:10.508 [Pipeline] { (Prologue)
00:00:10.526 [Pipeline] echo
00:00:10.527 Node: VM-host-SM16
00:00:10.534 [Pipeline] cleanWs
00:00:10.542 [WS-CLEANUP] Deleting project workspace...
00:00:10.542 [WS-CLEANUP] Deferred wipeout is used...
00:00:10.549 [WS-CLEANUP] done
00:00:10.760 [Pipeline] setCustomBuildProperty
00:00:10.842 [Pipeline] httpRequest
00:00:11.569 [Pipeline] echo
00:00:11.571 Sorcerer 10.211.164.101 is alive
00:00:11.582 [Pipeline] retry
00:00:11.585 [Pipeline] {
00:00:11.600 [Pipeline] httpRequest
00:00:11.604 HttpMethod: GET
00:00:11.604 URL: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:11.605 Sending request to url: http://10.211.164.101/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:11.606 Response Code: HTTP/1.1 200 OK
00:00:11.607 Success: Status code 200 is in the accepted range: 200,404
00:00:11.607 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:12.820 [Pipeline] }
00:00:12.837 [Pipeline] // retry
00:00:12.844 [Pipeline] sh
00:00:13.123 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:13.140 [Pipeline] httpRequest
00:00:13.522 [Pipeline] echo
00:00:13.524 Sorcerer 10.211.164.101 is alive
00:00:13.535 [Pipeline] retry
00:00:13.537 [Pipeline] {
00:00:13.553 [Pipeline] httpRequest
00:00:13.558 HttpMethod: GET
00:00:13.559 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:13.559 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:00:13.583 Response Code: HTTP/1.1 200 OK
00:00:13.583 Success: Status code 200 is in the accepted range: 200,404
00:00:13.584 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:01:12.960 [Pipeline] }
00:01:12.984 [Pipeline] // retry
00:01:12.993 [Pipeline] sh
00:01:13.280 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz
00:01:15.825 [Pipeline] sh
00:01:16.105 + git -C spdk log --oneline -n5
00:01:16.105 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:16.105 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:16.105 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:16.105 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:16.105 9469ea403 nvme/fio_plugin: add trim support
00:01:16.125 [Pipeline] writeFile
00:01:16.141 [Pipeline] sh
00:01:16.422 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:16.434 [Pipeline] sh
00:01:16.714 + cat autorun-spdk.conf
00:01:16.715 SPDK_TEST_UNITTEST=1
00:01:16.715 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.715 SPDK_TEST_NVME=1
00:01:16.715 SPDK_TEST_BLOCKDEV=1
00:01:16.715 SPDK_RUN_ASAN=1
00:01:16.715 SPDK_RUN_UBSAN=1
00:01:16.715 SPDK_TEST_RAID5=1
00:01:16.715 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.722 RUN_NIGHTLY=1
00:01:16.724 [Pipeline] }
00:01:16.737 [Pipeline] // stage
00:01:16.752 [Pipeline] stage
00:01:16.754 [Pipeline] { (Run VM)
00:01:16.767 [Pipeline] sh
00:01:17.048 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:17.048 + echo 'Start stage prepare_nvme.sh'
00:01:17.048 Start stage prepare_nvme.sh
00:01:17.048 + [[ -n 1 ]]
00:01:17.048 + disk_prefix=ex1
00:01:17.048 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:01:17.049 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:01:17.049 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:01:17.049 ++ SPDK_TEST_UNITTEST=1
00:01:17.049 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:17.049 ++ SPDK_TEST_NVME=1
00:01:17.049 ++ SPDK_TEST_BLOCKDEV=1
00:01:17.049 ++ SPDK_RUN_ASAN=1
00:01:17.049 ++ SPDK_RUN_UBSAN=1
00:01:17.049 ++ SPDK_TEST_RAID5=1
00:01:17.049 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.049 ++ RUN_NIGHTLY=1
00:01:17.049 + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:17.049 + nvme_files=()
00:01:17.049 + declare -A nvme_files
00:01:17.049 + backend_dir=/var/lib/libvirt/images/backends
00:01:17.049 + nvme_files['nvme.img']=5G
00:01:17.049 + nvme_files['nvme-cmb.img']=5G
00:01:17.049 + nvme_files['nvme-multi0.img']=4G
00:01:17.049 + nvme_files['nvme-multi1.img']=4G
00:01:17.049 + nvme_files['nvme-multi2.img']=4G
00:01:17.049 + nvme_files['nvme-openstack.img']=8G
00:01:17.049 + nvme_files['nvme-zns.img']=5G
00:01:17.049 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:17.049 + (( SPDK_TEST_FTL == 1 ))
00:01:17.049 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:17.049 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:17.049 + for nvme in "${!nvme_files[@]}"
00:01:17.049 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:17.049 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:17.049 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:17.308 + echo 'End stage prepare_nvme.sh'
00:01:17.308 End stage prepare_nvme.sh
00:01:17.320 [Pipeline] sh
00:01:17.601 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:17.602 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2204
00:01:17.602 
00:01:17.602 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:01:17.602 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:01:17.602 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:01:17.602 HELP=0
00:01:17.602 DRY_RUN=0
00:01:17.602 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,
00:01:17.602 NVME_DISKS_TYPE=nvme,
00:01:17.602 NVME_AUTO_CREATE=0
00:01:17.602 NVME_DISKS_NAMESPACES=,
00:01:17.602 NVME_CMB=,
00:01:17.602 NVME_PMR=,
00:01:17.602 NVME_ZNS=,
00:01:17.602 NVME_MS=,
00:01:17.602 NVME_FDP=,
00:01:17.602 SPDK_VAGRANT_DISTRO=ubuntu2204
00:01:17.602 SPDK_VAGRANT_VMCPU=10
00:01:17.602 SPDK_VAGRANT_VMRAM=12288
00:01:17.602 SPDK_VAGRANT_PROVIDER=libvirt
00:01:17.602 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:17.602 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:17.602 SPDK_OPENSTACK_NETWORK=0
00:01:17.602 VAGRANT_PACKAGE_BOX=0
00:01:17.602 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:17.602 FORCE_DISTRO=true
00:01:17.602 VAGRANT_BOX_VERSION=
00:01:17.602 EXTRA_VAGRANTFILES=
00:01:17.602 NIC_MODEL=e1000
00:01:17.602 
00:01:17.602 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:01:17.602 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:20.892 Bringing machine 'default' up with 'libvirt' provider...
00:01:21.151 ==> default: Creating image (snapshot of base box volume).
00:01:21.410 ==> default: Creating domain with the following settings...
00:01:21.410 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1728278424_d3842ff296adfff854d3
00:01:21.410 ==> default: -- Domain type: kvm
00:01:21.410 ==> default: -- Cpus: 10
00:01:21.410 ==> default: -- Feature: acpi
00:01:21.410 ==> default: -- Feature: apic
00:01:21.410 ==> default: -- Feature: pae
00:01:21.410 ==> default: -- Memory: 12288M
00:01:21.410 ==> default: -- Memory Backing: hugepages:
00:01:21.410 ==> default: -- Management MAC:
00:01:21.410 ==> default: -- Loader:
00:01:21.410 ==> default: -- Nvram:
00:01:21.410 ==> default: -- Base box: spdk/ubuntu2204
00:01:21.410 ==> default: -- Storage pool: default
00:01:21.410 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1728278424_d3842ff296adfff854d3.img (20G)
00:01:21.410 ==> default: -- Volume Cache: default
00:01:21.410 ==> default: -- Kernel:
00:01:21.410 ==> default: -- Initrd:
00:01:21.410 ==> default: -- Graphics Type: vnc
00:01:21.410 ==> default: -- Graphics Port: -1
00:01:21.410 ==> default: -- Graphics IP: 127.0.0.1
00:01:21.410 ==> default: -- Graphics Password: Not defined
00:01:21.410 ==> default: -- Video Type: cirrus
00:01:21.410 ==> default: -- Video VRAM: 9216
00:01:21.410 ==> default: -- Sound Type:
00:01:21.410 ==> default: -- Keymap: en-us
00:01:21.410 ==> default: -- TPM Path:
00:01:21.410 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:21.410 ==> default: -- Command line args:
00:01:21.410 ==> default: -> value=-device,
00:01:21.410 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:21.410 ==> default: -> value=-drive,
00:01:21.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:21.410 ==> default: -> value=-device,
00:01:21.410 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:21.410 ==> default: Creating shared folders metadata...
00:01:21.410 ==> default: Starting domain.
00:01:23.311 ==> default: Waiting for domain to get an IP address...
00:01:33.303 ==> default: Waiting for SSH to become available...
00:01:34.247 ==> default: Configuring and enabling network interfaces...
00:01:38.437 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:43.709 ==> default: Mounting SSHFS shared folder...
00:01:44.645 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:01:44.645 ==> default: Checking Mount..
00:01:45.581 ==> default: Folder Successfully Mounted!
00:01:45.581 ==> default: Running provisioner: file...
00:01:45.840 default: ~/.gitconfig => .gitconfig
00:01:46.098 
00:01:46.098 SUCCESS!
00:01:46.098 
00:01:46.098 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:46.098 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:46.098 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:46.098 
00:01:46.108 [Pipeline] }
00:01:46.123 [Pipeline] // stage
00:01:46.132 [Pipeline] dir
00:01:46.133 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:01:46.135 [Pipeline] {
00:01:46.147 [Pipeline] catchError
00:01:46.149 [Pipeline] {
00:01:46.161 [Pipeline] sh
00:01:46.441 + vagrant ssh-config --host vagrant
00:01:46.441 + sed -ne /^Host/,$p
00:01:46.441 + tee ssh_conf
00:01:49.733 Host vagrant
00:01:49.733 HostName 192.168.121.194
00:01:49.733 User vagrant
00:01:49.733 Port 22
00:01:49.733 UserKnownHostsFile /dev/null
00:01:49.733 StrictHostKeyChecking no
00:01:49.733 PasswordAuthentication no
00:01:49.733 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:49.733 IdentitiesOnly yes
00:01:49.733 LogLevel FATAL
00:01:49.733 ForwardAgent yes
00:01:49.733 ForwardX11 yes
00:01:49.733 
00:01:49.746 [Pipeline] withEnv
00:01:49.749 [Pipeline] {
00:01:49.763 [Pipeline] sh
00:01:50.075 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:50.075 source /etc/os-release
00:01:50.075 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.075 # Minimal, systemd-like check.
00:01:50.075 if [[ -e /.dockerenv ]]; then
00:01:50.075 # Clear garbage from the node's name:
00:01:50.075 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.075 # $HOSTNAME is the actual container id
00:01:50.075 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.075 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.075 # We can assume this is a mount from a host where container is running,
00:01:50.075 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.075 container="$(< /etc/hostname) ($agent)" 00:01:50.075 else 00:01:50.075 # Fallback 00:01:50.075 container=$agent 00:01:50.075 fi 00:01:50.075 fi 00:01:50.075 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:50.075 00:01:50.346 [Pipeline] } 00:01:50.363 [Pipeline] // withEnv 00:01:50.372 [Pipeline] setCustomBuildProperty 00:01:50.387 [Pipeline] stage 00:01:50.390 [Pipeline] { (Tests) 00:01:50.407 [Pipeline] sh 00:01:50.688 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:50.961 [Pipeline] sh 00:01:51.241 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:51.516 [Pipeline] timeout 00:01:51.517 Timeout set to expire in 1 hr 30 min 00:01:51.519 [Pipeline] { 00:01:51.535 [Pipeline] sh 00:01:51.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:52.383 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:52.395 [Pipeline] sh 00:01:52.678 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:52.952 [Pipeline] sh 00:01:53.238 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:53.516 [Pipeline] sh 00:01:53.801 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:54.061 ++ readlink -f spdk_repo 00:01:54.061 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:54.061 + [[ -n /home/vagrant/spdk_repo ]] 00:01:54.061 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:54.061 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:54.061 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:54.061 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:54.061 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:54.061 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:54.061 + cd /home/vagrant/spdk_repo 00:01:54.061 + source /etc/os-release 00:01:54.061 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:54.061 ++ NAME=Ubuntu 00:01:54.061 ++ VERSION_ID=22.04 00:01:54.061 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:54.061 ++ VERSION_CODENAME=jammy 00:01:54.061 ++ ID=ubuntu 00:01:54.061 ++ ID_LIKE=debian 00:01:54.061 ++ HOME_URL=https://www.ubuntu.com/ 00:01:54.061 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:54.061 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:54.061 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:54.061 ++ UBUNTU_CODENAME=jammy 00:01:54.061 + uname -a 00:01:54.061 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:54.061 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:54.061 Hugepages 00:01:54.061 node hugesize free / total 00:01:54.061 node0 1048576kB 0 / 0 00:01:54.061 node0 2048kB 0 / 0 00:01:54.061 00:01:54.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:54.061 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:54.326 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:54.326 + rm -f /tmp/spdk-ld-path 00:01:54.326 + source autorun-spdk.conf 00:01:54.326 ++ SPDK_TEST_UNITTEST=1 00:01:54.326 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.326 ++ SPDK_TEST_NVME=1 00:01:54.326 ++ SPDK_TEST_BLOCKDEV=1 00:01:54.326 ++ SPDK_RUN_ASAN=1 00:01:54.326 ++ SPDK_RUN_UBSAN=1 00:01:54.326 ++ SPDK_TEST_RAID5=1 00:01:54.326 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.326 ++ RUN_NIGHTLY=1 00:01:54.326 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:54.326 + [[ -n '' ]] 00:01:54.326 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:54.326 + for M in /var/spdk/build-*-manifest.txt 00:01:54.326 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:54.326 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.326 + for M in /var/spdk/build-*-manifest.txt 00:01:54.326 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:54.326 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.326 ++ uname 00:01:54.326 + [[ Linux == \L\i\n\u\x ]] 00:01:54.326 + sudo dmesg -T 00:01:54.326 + sudo dmesg --clear 00:01:54.327 + dmesg_pid=2093 00:01:54.327 + sudo dmesg -Tw 00:01:54.327 + [[ Ubuntu == FreeBSD ]] 00:01:54.327 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.327 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.327 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:54.327 + [[ -x /usr/src/fio-static/fio ]] 00:01:54.327 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:54.327 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:54.327 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:54.327 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:54.327 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:54.327 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:54.327 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:54.327 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:54.327 Test configuration: 00:01:54.327 SPDK_TEST_UNITTEST=1 00:01:54.327 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.327 SPDK_TEST_NVME=1 00:01:54.327 SPDK_TEST_BLOCKDEV=1 00:01:54.327 SPDK_RUN_ASAN=1 00:01:54.327 SPDK_RUN_UBSAN=1 00:01:54.327 SPDK_TEST_RAID5=1 00:01:54.327 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.327 RUN_NIGHTLY=1 05:20:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:54.327 05:20:58 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:54.327 05:20:58 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:54.327 05:20:58 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:54.327 05:20:58 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.327 05:20:58 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.327 05:20:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.327 05:20:58 -- paths/export.sh@5 -- $ export PATH 00:01:54.327 05:20:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.327 05:20:58 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:54.327 05:20:58 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:54.327 05:20:58 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728278458.XXXXXX 00:01:54.327 05:20:58 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728278458.zGmmNs 00:01:54.327 05:20:58 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:54.327 05:20:58 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:54.327 05:20:58 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:54.327 05:20:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:54.327 05:20:58 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:54.327 05:20:58 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:54.327 05:20:58 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:54.327 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.327 05:20:58 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:54.327 05:20:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.327 05:20:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.327 05:20:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:54.327 05:20:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.327 Mon Oct 7 05:20:58 UTC 2024 00:01:54.327 05:20:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.327 LTS-66-g726a04d70 00:01:54.327 05:20:58 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:54.327 05:20:58 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:54.327 05:20:58 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:54.327 05:20:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.327 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.327 ************************************ 00:01:54.327 START TEST asan 00:01:54.327 ************************************ 00:01:54.327 using asan 00:01:54.327 05:20:58 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:54.327 00:01:54.327 real 0m0.001s 00:01:54.327 user 0m0.000s 00:01:54.327 sys 0m0.000s 00:01:54.327 05:20:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.327 ************************************ 00:01:54.327 END TEST asan 00:01:54.327 ************************************ 00:01:54.327 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.586 05:20:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.586 05:20:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.586 05:20:58 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:54.586 05:20:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.586 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.586 ************************************ 00:01:54.586 START TEST ubsan 00:01:54.586 ************************************ 00:01:54.586 using ubsan 00:01:54.586 05:20:58 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:54.586 00:01:54.586 real 0m0.000s 00:01:54.586 user 0m0.000s 00:01:54.586 sys 0m0.000s 00:01:54.586 05:20:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.586 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.586 ************************************ 00:01:54.586 END TEST ubsan 00:01:54.586 ************************************ 00:01:54.586 05:20:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:54.586 05:20:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:54.586 05:20:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:54.586 05:20:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:54.586 05:20:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.586 05:20:58 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:54.586 05:20:58 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:54.586 05:20:58 -- common/autobuild_common.sh@416 -- $ run_test unittest_build 
_unittest_build 00:01:54.586 05:20:58 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:54.586 05:20:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.586 05:20:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.586 ************************************ 00:01:54.586 START TEST unittest_build 00:01:54.586 ************************************ 00:01:54.586 05:20:58 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:01:54.586 05:20:58 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:54.586 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:54.586 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.845 Using 'verbs' RDMA provider 00:02:10.293 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:22.501 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:22.501 Creating mk/config.mk...done. 00:02:22.501 Creating mk/cc.flags.mk...done. 00:02:22.501 Type 'make' to build. 00:02:22.501 05:21:24 -- common/autobuild_common.sh@408 -- $ make -j10 00:02:22.501 make[1]: Nothing to be done for 'all'. 00:02:34.705 The Meson build system 00:02:34.705 Version: 1.4.0 00:02:34.705 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:34.705 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:34.705 Build type: native build 00:02:34.705 Program cat found: YES (/usr/bin/cat) 00:02:34.705 Project name: DPDK 00:02:34.705 Project version: 23.11.0 00:02:34.705 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:34.705 C linker for the host machine: cc ld.bfd 2.38 00:02:34.705 Host machine cpu family: x86_64 00:02:34.705 Host machine cpu: x86_64 00:02:34.705 Message: ## Building in Developer Mode ## 00:02:34.705 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.705 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.705 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.705 Program python3 found: YES (/usr/bin/python3) 00:02:34.705 Program cat found: YES (/usr/bin/cat) 00:02:34.705 Compiler for C supports arguments -march=native: YES 00:02:34.705 Checking for size of "void *" : 8 00:02:34.705 Checking for size of "void *" : 8 (cached) 00:02:34.705 Library m found: YES 00:02:34.705 Library numa found: YES 00:02:34.705 Has header "numaif.h" : YES 00:02:34.705 Library fdt found: NO 00:02:34.705 Library execinfo found: NO 00:02:34.705 Has header "execinfo.h" : YES 00:02:34.705 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:34.705 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.705 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.705 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.705 Run-time dependency openssl found: YES 3.0.2 00:02:34.705 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:34.705 Library pcap found: NO 00:02:34.705 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.705 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.705 Compiler for C supports arguments -Wformat: YES 00:02:34.705 Compiler for C supports arguments 
-Wformat-nonliteral: YES 00:02:34.705 Compiler for C supports arguments -Wformat-security: YES 00:02:34.705 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.705 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.705 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.705 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.705 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.705 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.705 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.705 Compiler for C supports arguments -Wundef: YES 00:02:34.705 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.705 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.705 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.705 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.705 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.705 Program objdump found: YES (/usr/bin/objdump) 00:02:34.705 Compiler for C supports arguments -mavx512f: YES 00:02:34.705 Checking if "AVX512 checking" compiles: YES 00:02:34.705 Fetching value of define "__SSE4_2__" : 1 00:02:34.705 Fetching value of define "__AES__" : 1 00:02:34.705 Fetching value of define "__AVX__" : 1 00:02:34.705 Fetching value of define "__AVX2__" : 1 00:02:34.705 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.705 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.705 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.705 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.705 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.705 Fetching value of define "__PCLMUL__" : 1 00:02:34.705 Fetching value of define "__RDRND__" : 1 00:02:34.705 Fetching value of define "__RDSEED__" : 1 00:02:34.705 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.705 Fetching value of define "__znver1__" : (undefined) 00:02:34.705 Fetching value of define "__znver2__" : (undefined) 00:02:34.705 Fetching value of define "__znver3__" : (undefined) 00:02:34.705 Fetching value of define "__znver4__" : (undefined) 00:02:34.705 Library asan found: YES 00:02:34.705 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.705 Message: lib/log: Defining dependency "log" 00:02:34.705 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.705 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.705 Library rt found: YES 00:02:34.705 Checking for function "getentropy" : NO 00:02:34.705 Message: lib/eal: Defining dependency "eal" 00:02:34.705 Message: lib/ring: Defining dependency "ring" 00:02:34.705 Message: lib/rcu: Defining dependency "rcu" 00:02:34.705 Message: lib/mempool: Defining dependency "mempool" 00:02:34.705 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.705 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.705 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.705 Compiler for C supports arguments -mpclmul: YES 00:02:34.705 Compiler for C supports arguments -maes: YES 00:02:34.705 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.705 Compiler for C supports arguments -mavx512bw: YES 00:02:34.705 Compiler for C supports arguments -mavx512dq: YES 00:02:34.705 Compiler for C supports arguments -mavx512vl: YES 00:02:34.705 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.705 Compiler for C 
supports arguments -mavx2: YES 00:02:34.705 Compiler for C supports arguments -mavx: YES 00:02:34.705 Message: lib/net: Defining dependency "net" 00:02:34.705 Message: lib/meter: Defining dependency "meter" 00:02:34.705 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.705 Message: lib/pci: Defining dependency "pci" 00:02:34.705 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.705 Message: lib/hash: Defining dependency "hash" 00:02:34.705 Message: lib/timer: Defining dependency "timer" 00:02:34.705 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.705 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.705 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.705 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.705 Message: lib/power: Defining dependency "power" 00:02:34.705 Message: lib/reorder: Defining dependency "reorder" 00:02:34.705 Message: lib/security: Defining dependency "security" 00:02:34.705 Has header "linux/userfaultfd.h" : YES 00:02:34.705 Has header "linux/vduse.h" : YES 00:02:34.706 Message: lib/vhost: Defining dependency "vhost" 00:02:34.706 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.706 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.706 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.706 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.706 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.706 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.706 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.706 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.706 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.706 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.706 Program doxygen found: YES (/usr/bin/doxygen) 00:02:34.706 Configuring doxy-api-html.conf using configuration 00:02:34.706 Configuring doxy-api-man.conf using configuration 00:02:34.706 Program mandb found: YES (/usr/bin/mandb) 00:02:34.706 Program sphinx-build found: NO 00:02:34.706 Configuring rte_build_config.h using configuration 00:02:34.706 Message: 00:02:34.706 ================= 00:02:34.706 Applications Enabled 00:02:34.706 ================= 00:02:34.706 00:02:34.706 apps: 00:02:34.706 00:02:34.706 00:02:34.706 Message: 00:02:34.706 ================= 00:02:34.706 Libraries Enabled 00:02:34.706 ================= 00:02:34.706 00:02:34.706 libs: 00:02:34.706 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.706 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.706 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.706 00:02:34.706 Message: 00:02:34.706 =============== 00:02:34.706 Drivers Enabled 00:02:34.706 =============== 00:02:34.706 00:02:34.706 common: 00:02:34.706 00:02:34.706 bus: 00:02:34.706 pci, vdev, 00:02:34.706 mempool: 00:02:34.706 ring, 00:02:34.706 dma: 00:02:34.706 00:02:34.706 net: 00:02:34.706 00:02:34.706 crypto: 00:02:34.706 00:02:34.706 compress: 00:02:34.706 00:02:34.706 vdpa: 00:02:34.706 00:02:34.706 00:02:34.706 Message: 00:02:34.706 ================= 00:02:34.706 Content Skipped 00:02:34.706 ================= 00:02:34.706 00:02:34.706 apps: 00:02:34.706 dumpcap: explicitly disabled via build config 00:02:34.706 graph: explicitly disabled via build config 00:02:34.706 pdump: explicitly 
disabled via build config 00:02:34.706 proc-info: explicitly disabled via build config 00:02:34.706 test-acl: explicitly disabled via build config 00:02:34.706 test-bbdev: explicitly disabled via build config 00:02:34.706 test-cmdline: explicitly disabled via build config 00:02:34.706 test-compress-perf: explicitly disabled via build config 00:02:34.706 test-crypto-perf: explicitly disabled via build config 00:02:34.706 test-dma-perf: explicitly disabled via build config 00:02:34.706 test-eventdev: explicitly disabled via build config 00:02:34.706 test-fib: explicitly disabled via build config 00:02:34.706 test-flow-perf: explicitly disabled via build config 00:02:34.706 test-gpudev: explicitly disabled via build config 00:02:34.706 test-mldev: explicitly disabled via build config 00:02:34.706 test-pipeline: explicitly disabled via build config 00:02:34.706 test-pmd: explicitly disabled via build config 00:02:34.706 test-regex: explicitly disabled via build config 00:02:34.706 test-sad: explicitly disabled via build config 00:02:34.706 test-security-perf: explicitly disabled via build config 00:02:34.706 00:02:34.706 libs: 00:02:34.706 metrics: explicitly disabled via build config 00:02:34.706 acl: explicitly disabled via build config 00:02:34.706 bbdev: explicitly disabled via build config 00:02:34.706 bitratestats: explicitly disabled via build config 00:02:34.706 bpf: explicitly disabled via build config 00:02:34.706 cfgfile: explicitly disabled via build config 00:02:34.706 distributor: explicitly disabled via build config 00:02:34.706 efd: explicitly disabled via build config 00:02:34.706 eventdev: explicitly disabled via build config 00:02:34.706 dispatcher: explicitly disabled via build config 00:02:34.706 gpudev: explicitly disabled via build config 00:02:34.706 gro: explicitly disabled via build config 00:02:34.706 gso: explicitly disabled via build config 00:02:34.706 ip_frag: explicitly disabled via build config 00:02:34.706 jobstats: explicitly disabled via build config 00:02:34.706 latencystats: explicitly disabled via build config 00:02:34.706 lpm: explicitly disabled via build config 00:02:34.706 member: explicitly disabled via build config 00:02:34.706 pcapng: explicitly disabled via build config 00:02:34.706 rawdev: explicitly disabled via build config 00:02:34.706 regexdev: explicitly disabled via build config 00:02:34.706 mldev: explicitly disabled via build config 00:02:34.706 rib: explicitly disabled via build config 00:02:34.706 sched: explicitly disabled via build config 00:02:34.706 stack: explicitly disabled via build config 00:02:34.706 ipsec: explicitly disabled via build config 00:02:34.706 pdcp: explicitly disabled via build config 00:02:34.706 fib: explicitly disabled via build config 00:02:34.706 port: explicitly disabled via build config 00:02:34.706 pdump: explicitly disabled via build config 00:02:34.706 table: explicitly disabled via build config 00:02:34.706 pipeline: explicitly disabled via build config 00:02:34.706 graph: explicitly disabled via build config 00:02:34.706 node: explicitly disabled via build config 00:02:34.706 00:02:34.706 drivers: 00:02:34.706 common/cpt: not in enabled drivers build config 00:02:34.706 common/dpaax: not in enabled drivers build config 00:02:34.706 common/iavf: not in enabled drivers build config 00:02:34.706 common/idpf: not in enabled drivers build config 00:02:34.706 common/mvep: not in enabled drivers build config 00:02:34.706 common/octeontx: not in enabled drivers build config 00:02:34.706 bus/auxiliary: not in 
enabled drivers build config 00:02:34.706 bus/cdx: not in enabled drivers build config 00:02:34.706 bus/dpaa: not in enabled drivers build config 00:02:34.706 bus/fslmc: not in enabled drivers build config 00:02:34.706 bus/ifpga: not in enabled drivers build config 00:02:34.706 bus/platform: not in enabled drivers build config 00:02:34.706 bus/vmbus: not in enabled drivers build config 00:02:34.706 common/cnxk: not in enabled drivers build config 00:02:34.706 common/mlx5: not in enabled drivers build config 00:02:34.706 common/nfp: not in enabled drivers build config 00:02:34.706 common/qat: not in enabled drivers build config 00:02:34.706 common/sfc_efx: not in enabled drivers build config 00:02:34.706 mempool/bucket: not in enabled drivers build config 00:02:34.706 mempool/cnxk: not in enabled drivers build config 00:02:34.706 mempool/dpaa: not in enabled drivers build config 00:02:34.706 mempool/dpaa2: not in enabled drivers build config 00:02:34.706 mempool/octeontx: not in enabled drivers build config 00:02:34.706 mempool/stack: not in enabled drivers build config 00:02:34.706 dma/cnxk: not in enabled drivers build config 00:02:34.706 dma/dpaa: not in enabled drivers build config 00:02:34.706 dma/dpaa2: not in enabled drivers build config 00:02:34.706 dma/hisilicon: not in enabled drivers build config 00:02:34.706 dma/idxd: not in enabled drivers build config 00:02:34.706 dma/ioat: not in enabled drivers build config 00:02:34.706 dma/skeleton: not in enabled drivers build config 00:02:34.706 net/af_packet: not in enabled drivers build config 00:02:34.706 net/af_xdp: not in enabled drivers build config 00:02:34.706 net/ark: not in enabled drivers build config 00:02:34.706 net/atlantic: not in enabled drivers build config 00:02:34.706 net/avp: not in enabled drivers build config 00:02:34.706 net/axgbe: not in enabled drivers build config 00:02:34.706 net/bnx2x: not in enabled drivers build config 00:02:34.706 net/bnxt: not in enabled drivers build config 00:02:34.706 net/bonding: not in enabled drivers build config 00:02:34.706 net/cnxk: not in enabled drivers build config 00:02:34.706 net/cpfl: not in enabled drivers build config 00:02:34.706 net/cxgbe: not in enabled drivers build config 00:02:34.706 net/dpaa: not in enabled drivers build config 00:02:34.706 net/dpaa2: not in enabled drivers build config 00:02:34.706 net/e1000: not in enabled drivers build config 00:02:34.706 net/ena: not in enabled drivers build config 00:02:34.706 net/enetc: not in enabled drivers build config 00:02:34.706 net/enetfec: not in enabled drivers build config 00:02:34.706 net/enic: not in enabled drivers build config 00:02:34.706 net/failsafe: not in enabled drivers build config 00:02:34.706 net/fm10k: not in enabled drivers build config 00:02:34.706 net/gve: not in enabled drivers build config 00:02:34.706 net/hinic: not in enabled drivers build config 00:02:34.706 net/hns3: not in enabled drivers build config 00:02:34.706 net/i40e: not in enabled drivers build config 00:02:34.706 net/iavf: not in enabled drivers build config 00:02:34.706 net/ice: not in enabled drivers build config 00:02:34.706 net/idpf: not in enabled drivers build config 00:02:34.706 net/igc: not in enabled drivers build config 00:02:34.706 net/ionic: not in enabled drivers build config 00:02:34.706 net/ipn3ke: not in enabled drivers build config 00:02:34.706 net/ixgbe: not in enabled drivers build config 00:02:34.706 net/mana: not in enabled drivers build config 00:02:34.706 net/memif: not in enabled drivers build config 
00:02:34.706 net/mlx4: not in enabled drivers build config 00:02:34.706 net/mlx5: not in enabled drivers build config 00:02:34.706 net/mvneta: not in enabled drivers build config 00:02:34.706 net/mvpp2: not in enabled drivers build config 00:02:34.706 net/netvsc: not in enabled drivers build config 00:02:34.706 net/nfb: not in enabled drivers build config 00:02:34.706 net/nfp: not in enabled drivers build config 00:02:34.706 net/ngbe: not in enabled drivers build config 00:02:34.706 net/null: not in enabled drivers build config 00:02:34.706 net/octeontx: not in enabled drivers build config 00:02:34.706 net/octeon_ep: not in enabled drivers build config 00:02:34.706 net/pcap: not in enabled drivers build config 00:02:34.706 net/pfe: not in enabled drivers build config 00:02:34.706 net/qede: not in enabled drivers build config 00:02:34.706 net/ring: not in enabled drivers build config 00:02:34.706 net/sfc: not in enabled drivers build config 00:02:34.706 net/softnic: not in enabled drivers build config 00:02:34.706 net/tap: not in enabled drivers build config 00:02:34.706 net/thunderx: not in enabled drivers build config 00:02:34.706 net/txgbe: not in enabled drivers build config 00:02:34.706 net/vdev_netvsc: not in enabled drivers build config 00:02:34.706 net/vhost: not in enabled drivers build config 00:02:34.706 net/virtio: not in enabled drivers build config 00:02:34.706 net/vmxnet3: not in enabled drivers build config 00:02:34.706 raw/*: missing internal dependency, "rawdev" 00:02:34.706 crypto/armv8: not in enabled drivers build config 00:02:34.706 crypto/bcmfs: not in enabled drivers build config 00:02:34.706 crypto/caam_jr: not in enabled drivers build config 00:02:34.706 crypto/ccp: not in enabled drivers build config 00:02:34.706 crypto/cnxk: not in enabled drivers build config 00:02:34.706 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.706 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.706 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.706 crypto/mlx5: not in enabled drivers build config 00:02:34.706 crypto/mvsam: not in enabled drivers build config 00:02:34.706 crypto/nitrox: not in enabled drivers build config 00:02:34.706 crypto/null: not in enabled drivers build config 00:02:34.706 crypto/octeontx: not in enabled drivers build config 00:02:34.706 crypto/openssl: not in enabled drivers build config 00:02:34.706 crypto/scheduler: not in enabled drivers build config 00:02:34.706 crypto/uadk: not in enabled drivers build config 00:02:34.706 crypto/virtio: not in enabled drivers build config 00:02:34.706 compress/isal: not in enabled drivers build config 00:02:34.706 compress/mlx5: not in enabled drivers build config 00:02:34.706 compress/octeontx: not in enabled drivers build config 00:02:34.706 compress/zlib: not in enabled drivers build config 00:02:34.707 regex/*: missing internal dependency, "regexdev" 00:02:34.707 ml/*: missing internal dependency, "mldev" 00:02:34.707 vdpa/ifc: not in enabled drivers build config 00:02:34.707 vdpa/mlx5: not in enabled drivers build config 00:02:34.707 vdpa/nfp: not in enabled drivers build config 00:02:34.707 vdpa/sfc: not in enabled drivers build config 00:02:34.707 event/*: missing internal dependency, "eventdev" 00:02:34.707 baseband/*: missing internal dependency, "bbdev" 00:02:34.707 gpu/*: missing internal dependency, "gpudev" 00:02:34.707 00:02:34.707 00:02:34.707 Build targets in project: 85 00:02:34.707 00:02:34.707 DPDK 23.11.0 00:02:34.707 00:02:34.707 User defined options 
00:02:34.707 buildtype : debug 00:02:34.707 default_library : static 00:02:34.707 libdir : lib 00:02:34.707 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.707 b_sanitize : address 00:02:34.707 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:02:34.707 c_link_args : 00:02:34.707 cpu_instruction_set: native 00:02:34.707 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:34.707 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:34.707 enable_docs : false 00:02:34.707 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.707 enable_kmods : false 00:02:34.707 tests : false 00:02:34.707 00:02:34.707 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.707 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:34.707 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.707 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.707 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.707 [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.707 [5/265] Linking static target lib/librte_kvargs.a 00:02:34.707 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.707 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.707 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.707 [9/265] Linking static target lib/librte_log.a 00:02:34.707 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.965 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.965 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.965 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.965 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.965 [15/265] Linking static target lib/librte_telemetry.a 00:02:35.224 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.224 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.224 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.482 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.482 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.482 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.482 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.482 [23/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.482 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.740 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.740 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.740 [27/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.000 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.000 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.000 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.000 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.259 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.259 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.259 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.259 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.259 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.259 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.259 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.259 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.517 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.517 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.517 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.776 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.776 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.776 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.776 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.776 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.776 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.035 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.035 [50/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.035 [51/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.035 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.035 [53/265] Linking target lib/librte_log.so.24.0 00:02:37.035 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.035 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.294 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.294 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.294 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.294 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.294 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.294 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.294 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.294 [63/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:37.553 [64/265] Linking target lib/librte_kvargs.so.24.0 00:02:37.553 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.553 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.553 [67/265] 
Linking target lib/librte_telemetry.so.24.0 00:02:37.553 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.553 [69/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:37.553 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.553 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.812 [72/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:37.812 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.812 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.812 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.812 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.812 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.071 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.071 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.071 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.071 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.071 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.330 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.330 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.330 [85/265] Linking static target lib/librte_eal.a 00:02:38.330 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.589 [87/265] Linking static target lib/librte_ring.a 00:02:38.589 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.589 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.589 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.589 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.589 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.589 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.847 [94/265] Linking static target lib/librte_rcu.a 00:02:38.847 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.847 [96/265] Linking static target lib/librte_mempool.a 00:02:38.847 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.847 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.106 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.106 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.106 [101/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.106 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.106 [103/265] Linking static target lib/librte_mbuf.a 00:02:39.106 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.364 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.364 [106/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.364 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.365 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.365 [109/265] Linking static 
target lib/librte_net.a 00:02:39.624 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.624 [111/265] Linking static target lib/librte_meter.a 00:02:39.624 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.883 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.883 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.883 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.883 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.883 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.153 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.153 [119/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.434 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.434 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.434 [122/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.434 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.693 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.693 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.693 [126/265] Linking static target lib/librte_pci.a 00:02:40.693 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.693 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.693 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:40.693 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.693 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:40.693 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:40.693 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.958 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:40.958 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:40.958 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:40.958 [137/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.958 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:40.958 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:40.958 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:40.958 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:40.958 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:40.958 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:40.958 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:40.958 [145/265] Linking static target lib/librte_cmdline.a 00:02:41.217 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.476 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.476 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:02:41.476 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:41.476 [150/265] Linking static target lib/librte_timer.a 00:02:41.476 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.476 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.734 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.734 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.734 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:41.734 [156/265] Linking static target lib/librte_compressdev.a 00:02:41.734 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.734 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:41.734 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:41.994 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:41.994 [161/265] Linking static target lib/librte_hash.a 00:02:41.994 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.994 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:41.994 [164/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.994 [165/265] Linking static target lib/librte_dmadev.a 00:02:41.994 [166/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.994 [167/265] Linking static target lib/librte_ethdev.a 00:02:41.994 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.252 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.252 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.252 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.252 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.252 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.510 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.510 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:42.510 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.510 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:42.510 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:42.510 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.769 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:42.769 [181/265] Linking static target lib/librte_cryptodev.a 00:02:42.769 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.769 [183/265] Linking static target lib/librte_power.a 00:02:43.027 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.027 [185/265] Linking static target lib/librte_reorder.a 00:02:43.027 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.027 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.027 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.285 [189/265] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:43.285 [190/265] Linking static target lib/librte_security.a 00:02:43.285 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.285 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.285 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.543 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.543 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:43.802 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.802 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.802 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:43.802 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.061 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.061 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.061 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.061 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.319 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.319 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.319 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.319 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.319 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.584 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.584 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.584 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.584 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.584 [213/265] Linking static target drivers/librte_bus_pci.a 00:02:44.584 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.584 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.584 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:44.584 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.584 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.846 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.846 [220/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.846 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.846 [222/265] Linking static target drivers/librte_mempool_ring.a 00:02:44.846 [223/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.104 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.038 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.414 [226/265] Generating lib/eal.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:47.414 [227/265] Linking target lib/librte_eal.so.24.0 00:02:47.414 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:47.414 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.414 [230/265] Linking target lib/librte_meter.so.24.0 00:02:47.414 [231/265] Linking target lib/librte_ring.so.24.0 00:02:47.414 [232/265] Linking target lib/librte_timer.so.24.0 00:02:47.414 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:47.414 [234/265] Linking target lib/librte_pci.so.24.0 00:02:47.414 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:47.672 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:47.672 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:47.672 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:47.672 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:47.672 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:47.672 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:47.672 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:47.672 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:47.672 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:47.931 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:47.931 [246/265] Linking target lib/librte_mbuf.so.24.0 00:02:47.931 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:47.931 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:47.931 [249/265] Linking target lib/librte_reorder.so.24.0 00:02:47.931 [250/265] Linking target lib/librte_compressdev.so.24.0 00:02:47.931 [251/265] Linking target lib/librte_net.so.24.0 00:02:47.931 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:48.189 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:48.189 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:48.189 [255/265] Linking target lib/librte_hash.so.24.0 00:02:48.189 [256/265] Linking target lib/librte_cmdline.so.24.0 00:02:48.189 [257/265] Linking target lib/librte_security.so.24.0 00:02:48.189 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:48.447 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:48.447 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:48.447 [261/265] Linking target lib/librte_power.so.24.0 00:02:49.381 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.381 [263/265] Linking static target lib/librte_vhost.a 00:02:51.304 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.304 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:51.304 INFO: autodetecting backend as ninja 00:02:51.304 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:51.872 CC lib/ut_mock/mock.o 00:02:51.872 CC lib/log/log.o 00:02:51.872 CC lib/log/log_flags.o 00:02:51.872 CC lib/log/log_deprecated.o 00:02:51.872 CC lib/ut/ut.o 00:02:52.131 LIB libspdk_ut_mock.a 
00:02:52.131 LIB libspdk_ut.a 00:02:52.131 LIB libspdk_log.a 00:02:52.390 CC lib/ioat/ioat.o 00:02:52.390 CC lib/util/base64.o 00:02:52.390 CC lib/util/bit_array.o 00:02:52.390 CC lib/util/cpuset.o 00:02:52.390 CC lib/util/crc16.o 00:02:52.390 CC lib/util/crc32.o 00:02:52.390 CC lib/util/crc32c.o 00:02:52.390 CXX lib/trace_parser/trace.o 00:02:52.390 CC lib/dma/dma.o 00:02:52.390 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.390 CC lib/vfio_user/host/vfio_user.o 00:02:52.390 CC lib/util/crc32_ieee.o 00:02:52.390 CC lib/util/crc64.o 00:02:52.390 CC lib/util/dif.o 00:02:52.649 LIB libspdk_dma.a 00:02:52.649 CC lib/util/fd.o 00:02:52.649 CC lib/util/file.o 00:02:52.649 CC lib/util/hexlify.o 00:02:52.649 CC lib/util/iov.o 00:02:52.649 CC lib/util/math.o 00:02:52.649 LIB libspdk_ioat.a 00:02:52.649 CC lib/util/pipe.o 00:02:52.649 CC lib/util/strerror_tls.o 00:02:52.649 LIB libspdk_vfio_user.a 00:02:52.649 CC lib/util/string.o 00:02:52.649 CC lib/util/uuid.o 00:02:52.649 CC lib/util/fd_group.o 00:02:52.649 CC lib/util/xor.o 00:02:52.908 CC lib/util/zipf.o 00:02:53.167 LIB libspdk_util.a 00:02:53.426 CC lib/idxd/idxd.o 00:02:53.426 CC lib/idxd/idxd_user.o 00:02:53.426 CC lib/vmd/vmd.o 00:02:53.426 CC lib/vmd/led.o 00:02:53.426 CC lib/conf/conf.o 00:02:53.426 CC lib/rdma/common.o 00:02:53.426 CC lib/rdma/rdma_verbs.o 00:02:53.426 CC lib/json/json_parse.o 00:02:53.426 LIB libspdk_trace_parser.a 00:02:53.426 CC lib/env_dpdk/env.o 00:02:53.426 CC lib/env_dpdk/memory.o 00:02:53.426 CC lib/env_dpdk/pci.o 00:02:53.426 CC lib/env_dpdk/init.o 00:02:53.426 LIB libspdk_conf.a 00:02:53.426 CC lib/json/json_util.o 00:02:53.426 CC lib/env_dpdk/threads.o 00:02:53.685 CC lib/env_dpdk/pci_ioat.o 00:02:53.685 LIB libspdk_rdma.a 00:02:53.685 CC lib/json/json_write.o 00:02:53.685 CC lib/env_dpdk/pci_virtio.o 00:02:53.685 CC lib/env_dpdk/pci_vmd.o 00:02:53.685 CC lib/env_dpdk/pci_idxd.o 00:02:53.944 CC lib/env_dpdk/pci_event.o 00:02:53.944 CC lib/env_dpdk/sigbus_handler.o 00:02:53.944 CC lib/env_dpdk/pci_dpdk.o 00:02:53.944 LIB libspdk_idxd.a 00:02:53.944 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.944 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:53.944 LIB libspdk_json.a 00:02:54.203 LIB libspdk_vmd.a 00:02:54.203 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.203 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.203 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.203 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.464 LIB libspdk_jsonrpc.a 00:02:54.464 CC lib/rpc/rpc.o 00:02:54.724 LIB libspdk_rpc.a 00:02:54.724 LIB libspdk_env_dpdk.a 00:02:54.982 CC lib/sock/sock.o 00:02:54.982 CC lib/trace/trace.o 00:02:54.982 CC lib/sock/sock_rpc.o 00:02:54.982 CC lib/trace/trace_flags.o 00:02:54.982 CC lib/trace/trace_rpc.o 00:02:54.982 CC lib/notify/notify.o 00:02:54.982 CC lib/notify/notify_rpc.o 00:02:54.982 LIB libspdk_notify.a 00:02:55.242 LIB libspdk_trace.a 00:02:55.242 LIB libspdk_sock.a 00:02:55.242 CC lib/thread/thread.o 00:02:55.242 CC lib/thread/iobuf.o 00:02:55.501 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.501 CC lib/nvme/nvme_ctrlr.o 00:02:55.501 CC lib/nvme/nvme_ns_cmd.o 00:02:55.501 CC lib/nvme/nvme_fabric.o 00:02:55.501 CC lib/nvme/nvme_ns.o 00:02:55.501 CC lib/nvme/nvme_pcie.o 00:02:55.501 CC lib/nvme/nvme_pcie_common.o 00:02:55.501 CC lib/nvme/nvme_qpair.o 00:02:55.760 CC lib/nvme/nvme.o 00:02:56.019 CC lib/nvme/nvme_quirks.o 00:02:56.019 CC lib/nvme/nvme_transport.o 00:02:56.019 CC lib/nvme/nvme_discovery.o 00:02:56.278 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:56.278 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:56.278 CC lib/nvme/nvme_tcp.o 
00:02:56.537 CC lib/nvme/nvme_opal.o 00:02:56.537 CC lib/nvme/nvme_io_msg.o 00:02:56.537 CC lib/nvme/nvme_poll_group.o 00:02:56.537 CC lib/nvme/nvme_zns.o 00:02:56.537 CC lib/nvme/nvme_cuse.o 00:02:56.796 CC lib/nvme/nvme_vfio_user.o 00:02:56.796 CC lib/nvme/nvme_rdma.o 00:02:57.054 LIB libspdk_thread.a 00:02:57.054 CC lib/blob/blobstore.o 00:02:57.054 CC lib/blob/request.o 00:02:57.054 CC lib/accel/accel.o 00:02:57.054 CC lib/blob/zeroes.o 00:02:57.054 CC lib/init/json_config.o 00:02:57.054 CC lib/init/subsystem.o 00:02:57.312 CC lib/init/subsystem_rpc.o 00:02:57.312 CC lib/accel/accel_rpc.o 00:02:57.312 CC lib/accel/accel_sw.o 00:02:57.312 CC lib/init/rpc.o 00:02:57.312 CC lib/blob/blob_bs_dev.o 00:02:57.570 LIB libspdk_init.a 00:02:57.570 CC lib/virtio/virtio.o 00:02:57.570 CC lib/virtio/virtio_vhost_user.o 00:02:57.570 CC lib/virtio/virtio_vfio_user.o 00:02:57.570 CC lib/virtio/virtio_pci.o 00:02:57.829 CC lib/event/app.o 00:02:57.829 CC lib/event/reactor.o 00:02:57.829 CC lib/event/log_rpc.o 00:02:57.829 CC lib/event/app_rpc.o 00:02:57.829 CC lib/event/scheduler_static.o 00:02:57.829 LIB libspdk_virtio.a 00:02:58.086 LIB libspdk_nvme.a 00:02:58.086 LIB libspdk_accel.a 00:02:58.086 LIB libspdk_event.a 00:02:58.345 CC lib/bdev/bdev.o 00:02:58.345 CC lib/bdev/bdev_rpc.o 00:02:58.345 CC lib/bdev/bdev_zone.o 00:02:58.345 CC lib/bdev/part.o 00:02:58.345 CC lib/bdev/scsi_nvme.o 00:03:00.247 LIB libspdk_blob.a 00:03:00.247 CC lib/blobfs/blobfs.o 00:03:00.247 CC lib/blobfs/tree.o 00:03:00.247 CC lib/lvol/lvol.o 00:03:00.815 LIB libspdk_bdev.a 00:03:01.074 CC lib/nvmf/ctrlr.o 00:03:01.074 CC lib/nbd/nbd_rpc.o 00:03:01.074 CC lib/nbd/nbd.o 00:03:01.074 CC lib/nvmf/ctrlr_discovery.o 00:03:01.074 CC lib/nvmf/ctrlr_bdev.o 00:03:01.074 CC lib/nvmf/subsystem.o 00:03:01.074 CC lib/scsi/dev.o 00:03:01.074 LIB libspdk_blobfs.a 00:03:01.074 CC lib/ftl/ftl_core.o 00:03:01.074 CC lib/ftl/ftl_init.o 00:03:01.074 LIB libspdk_lvol.a 00:03:01.333 CC lib/ftl/ftl_layout.o 00:03:01.333 CC lib/ftl/ftl_debug.o 00:03:01.333 CC lib/scsi/lun.o 00:03:01.333 CC lib/scsi/port.o 00:03:01.592 CC lib/scsi/scsi.o 00:03:01.592 LIB libspdk_nbd.a 00:03:01.592 CC lib/nvmf/nvmf.o 00:03:01.592 CC lib/ftl/ftl_io.o 00:03:01.592 CC lib/ftl/ftl_sb.o 00:03:01.592 CC lib/ftl/ftl_l2p.o 00:03:01.592 CC lib/ftl/ftl_l2p_flat.o 00:03:01.592 CC lib/ftl/ftl_nv_cache.o 00:03:01.592 CC lib/scsi/scsi_bdev.o 00:03:01.852 CC lib/scsi/scsi_pr.o 00:03:01.852 CC lib/scsi/scsi_rpc.o 00:03:01.852 CC lib/scsi/task.o 00:03:01.852 CC lib/nvmf/nvmf_rpc.o 00:03:01.852 CC lib/nvmf/transport.o 00:03:01.852 CC lib/nvmf/tcp.o 00:03:02.111 CC lib/nvmf/rdma.o 00:03:02.111 CC lib/ftl/ftl_band.o 00:03:02.111 CC lib/ftl/ftl_band_ops.o 00:03:02.111 LIB libspdk_scsi.a 00:03:02.370 CC lib/ftl/ftl_writer.o 00:03:02.630 CC lib/ftl/ftl_rq.o 00:03:02.630 CC lib/iscsi/conn.o 00:03:02.630 CC lib/iscsi/init_grp.o 00:03:02.630 CC lib/vhost/vhost.o 00:03:02.630 CC lib/iscsi/iscsi.o 00:03:02.630 CC lib/ftl/ftl_reloc.o 00:03:02.630 CC lib/vhost/vhost_rpc.o 00:03:02.630 CC lib/iscsi/md5.o 00:03:02.890 CC lib/iscsi/param.o 00:03:02.890 CC lib/iscsi/portal_grp.o 00:03:02.890 CC lib/ftl/ftl_l2p_cache.o 00:03:03.156 CC lib/ftl/ftl_p2l.o 00:03:03.156 CC lib/vhost/vhost_scsi.o 00:03:03.156 CC lib/iscsi/tgt_node.o 00:03:03.156 CC lib/iscsi/iscsi_subsystem.o 00:03:03.156 CC lib/vhost/vhost_blk.o 00:03:03.156 CC lib/vhost/rte_vhost_user.o 00:03:03.414 CC lib/iscsi/iscsi_rpc.o 00:03:03.414 CC lib/ftl/mngt/ftl_mngt.o 00:03:03.674 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:03.674 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.674 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.674 CC lib/iscsi/task.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.933 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.192 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.192 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:04.192 CC lib/ftl/utils/ftl_conf.o 00:03:04.192 CC lib/ftl/utils/ftl_md.o 00:03:04.192 LIB libspdk_iscsi.a 00:03:04.192 CC lib/ftl/utils/ftl_mempool.o 00:03:04.192 LIB libspdk_vhost.a 00:03:04.192 CC lib/ftl/utils/ftl_bitmap.o 00:03:04.192 CC lib/ftl/utils/ftl_property.o 00:03:04.192 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:04.192 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:04.192 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:04.192 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:04.451 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:04.451 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:04.451 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:04.451 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:04.451 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:04.451 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:04.451 CC lib/ftl/base/ftl_base_dev.o 00:03:04.451 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.451 CC lib/ftl/ftl_trace.o 00:03:04.710 LIB libspdk_nvmf.a 00:03:04.710 LIB libspdk_ftl.a 00:03:05.278 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.278 CC module/blob/bdev/blob_bdev.o 00:03:05.278 CC module/accel/ioat/accel_ioat.o 00:03:05.278 CC module/accel/dsa/accel_dsa.o 00:03:05.278 CC module/accel/iaa/accel_iaa.o 00:03:05.278 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.278 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.278 CC module/accel/error/accel_error.o 00:03:05.278 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.278 CC module/sock/posix/posix.o 00:03:05.278 LIB libspdk_env_dpdk_rpc.a 00:03:05.278 CC module/accel/error/accel_error_rpc.o 00:03:05.278 LIB libspdk_scheduler_gscheduler.a 00:03:05.278 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.278 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.278 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.278 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.537 LIB libspdk_scheduler_dynamic.a 00:03:05.537 LIB libspdk_accel_error.a 00:03:05.537 LIB libspdk_accel_ioat.a 00:03:05.537 LIB libspdk_blob_bdev.a 00:03:05.537 LIB libspdk_accel_iaa.a 00:03:05.537 LIB libspdk_accel_dsa.a 00:03:05.537 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.537 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.537 CC module/bdev/gpt/gpt.o 00:03:05.537 CC module/bdev/delay/vbdev_delay.o 00:03:05.537 CC module/bdev/error/vbdev_error.o 00:03:05.796 CC module/bdev/nvme/bdev_nvme.o 00:03:05.796 CC module/bdev/malloc/bdev_malloc.o 00:03:05.796 CC module/bdev/null/bdev_null.o 00:03:05.796 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.796 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.796 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.055 CC module/bdev/null/bdev_null_rpc.o 00:03:06.055 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.055 LIB libspdk_sock_posix.a 00:03:06.055 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.055 LIB libspdk_blobfs_bdev.a 00:03:06.055 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.055 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.055 CC module/bdev/nvme/nvme_rpc.o 00:03:06.055 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.055 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.055 LIB libspdk_bdev_null.a 00:03:06.055 LIB libspdk_bdev_gpt.a 00:03:06.055 LIB libspdk_bdev_error.a 00:03:06.055 LIB libspdk_bdev_passthru.a 00:03:06.055 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.313 LIB libspdk_bdev_delay.a 00:03:06.313 CC module/bdev/raid/bdev_raid.o 00:03:06.313 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.313 CC module/bdev/split/vbdev_split.o 00:03:06.313 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.313 LIB libspdk_bdev_malloc.a 00:03:06.313 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.313 CC module/bdev/aio/bdev_aio.o 00:03:06.313 CC module/bdev/ftl/bdev_ftl.o 00:03:06.313 LIB libspdk_bdev_lvol.a 00:03:06.571 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.571 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.571 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:06.571 LIB libspdk_bdev_split.a 00:03:06.571 LIB libspdk_bdev_zone_block.a 00:03:06.571 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:06.571 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:06.571 CC module/bdev/nvme/vbdev_opal.o 00:03:06.857 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.857 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.857 LIB libspdk_bdev_ftl.a 00:03:06.857 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.857 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.857 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.857 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.857 LIB libspdk_bdev_aio.a 00:03:06.857 LIB libspdk_bdev_iscsi.a 00:03:06.857 CC module/bdev/raid/raid0.o 00:03:06.857 CC module/bdev/raid/raid1.o 00:03:06.857 CC module/bdev/raid/concat.o 00:03:07.116 CC module/bdev/raid/raid5f.o 00:03:07.116 LIB libspdk_bdev_virtio.a 00:03:07.374 LIB libspdk_bdev_raid.a 00:03:07.941 LIB libspdk_bdev_nvme.a 00:03:08.200 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.200 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.200 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.200 CC module/event/subsystems/vmd/vmd.o 00:03:08.200 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.200 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.200 CC module/event/subsystems/sock/sock.o 00:03:08.459 LIB libspdk_event_scheduler.a 00:03:08.459 LIB libspdk_event_sock.a 00:03:08.459 LIB libspdk_event_vhost_blk.a 00:03:08.459 LIB libspdk_event_iobuf.a 00:03:08.459 LIB libspdk_event_vmd.a 00:03:08.459 CC module/event/subsystems/accel/accel.o 00:03:08.718 LIB libspdk_event_accel.a 00:03:08.718 CC module/event/subsystems/bdev/bdev.o 00:03:08.976 LIB libspdk_event_bdev.a 00:03:09.235 CC module/event/subsystems/scsi/scsi.o 00:03:09.235 CC module/event/subsystems/nbd/nbd.o 00:03:09.235 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.235 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.235 LIB libspdk_event_nbd.a 00:03:09.235 LIB libspdk_event_scsi.a 00:03:09.494 LIB libspdk_event_nvmf.a 00:03:09.494 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.494 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.753 LIB libspdk_event_vhost_scsi.a 00:03:09.753 LIB libspdk_event_iscsi.a 00:03:09.753 TEST_HEADER include/spdk/accel.h 00:03:09.753 TEST_HEADER include/spdk/accel_module.h 00:03:09.753 CXX app/trace/trace.o 00:03:09.753 TEST_HEADER include/spdk/assert.h 00:03:09.753 TEST_HEADER include/spdk/barrier.h 00:03:09.753 TEST_HEADER include/spdk/base64.h 00:03:09.753 TEST_HEADER include/spdk/bdev.h 00:03:09.753 TEST_HEADER include/spdk/bdev_module.h 00:03:09.753 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.753 TEST_HEADER include/spdk/bit_array.h 
00:03:09.753 TEST_HEADER include/spdk/bit_pool.h 00:03:09.753 TEST_HEADER include/spdk/blob.h 00:03:09.753 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.753 TEST_HEADER include/spdk/blobfs.h 00:03:09.753 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.753 TEST_HEADER include/spdk/conf.h 00:03:09.753 TEST_HEADER include/spdk/config.h 00:03:09.753 TEST_HEADER include/spdk/cpuset.h 00:03:09.753 TEST_HEADER include/spdk/crc16.h 00:03:09.753 TEST_HEADER include/spdk/crc32.h 00:03:09.753 CC examples/accel/perf/accel_perf.o 00:03:09.753 TEST_HEADER include/spdk/crc64.h 00:03:09.753 TEST_HEADER include/spdk/dif.h 00:03:09.753 TEST_HEADER include/spdk/dma.h 00:03:09.753 TEST_HEADER include/spdk/endian.h 00:03:09.753 TEST_HEADER include/spdk/env.h 00:03:09.753 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.753 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.753 TEST_HEADER include/spdk/event.h 00:03:09.753 TEST_HEADER include/spdk/fd.h 00:03:09.753 CC examples/blob/hello_world/hello_blob.o 00:03:09.753 CC test/bdev/bdevio/bdevio.o 00:03:09.753 CC test/accel/dif/dif.o 00:03:09.753 CC test/blobfs/mkfs/mkfs.o 00:03:09.753 TEST_HEADER include/spdk/fd_group.h 00:03:09.753 TEST_HEADER include/spdk/file.h 00:03:09.753 CC test/dma/test_dma/test_dma.o 00:03:09.753 TEST_HEADER include/spdk/ftl.h 00:03:09.753 CC test/app/bdev_svc/bdev_svc.o 00:03:10.012 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.012 TEST_HEADER include/spdk/hexlify.h 00:03:10.012 TEST_HEADER include/spdk/histogram_data.h 00:03:10.012 TEST_HEADER include/spdk/idxd.h 00:03:10.012 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.012 TEST_HEADER include/spdk/init.h 00:03:10.012 TEST_HEADER include/spdk/ioat.h 00:03:10.012 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.012 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.012 TEST_HEADER include/spdk/json.h 00:03:10.012 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.012 TEST_HEADER include/spdk/likely.h 00:03:10.012 TEST_HEADER include/spdk/log.h 00:03:10.012 TEST_HEADER include/spdk/lvol.h 00:03:10.012 TEST_HEADER include/spdk/memory.h 00:03:10.012 TEST_HEADER include/spdk/mmio.h 00:03:10.012 TEST_HEADER include/spdk/nbd.h 00:03:10.012 TEST_HEADER include/spdk/notify.h 00:03:10.012 TEST_HEADER include/spdk/nvme.h 00:03:10.012 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.012 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.012 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.012 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.012 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.012 TEST_HEADER include/spdk/nvmf.h 00:03:10.012 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.012 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.012 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.012 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.012 TEST_HEADER include/spdk/opal.h 00:03:10.012 TEST_HEADER include/spdk/opal_spec.h 00:03:10.012 TEST_HEADER include/spdk/pci_ids.h 00:03:10.012 TEST_HEADER include/spdk/pipe.h 00:03:10.012 TEST_HEADER include/spdk/queue.h 00:03:10.012 TEST_HEADER include/spdk/reduce.h 00:03:10.012 TEST_HEADER include/spdk/rpc.h 00:03:10.012 TEST_HEADER include/spdk/scheduler.h 00:03:10.012 TEST_HEADER include/spdk/scsi.h 00:03:10.012 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.012 TEST_HEADER include/spdk/sock.h 00:03:10.012 TEST_HEADER include/spdk/stdinc.h 00:03:10.012 TEST_HEADER include/spdk/string.h 00:03:10.012 TEST_HEADER include/spdk/thread.h 00:03:10.012 TEST_HEADER include/spdk/trace.h 00:03:10.012 TEST_HEADER include/spdk/trace_parser.h 00:03:10.012 TEST_HEADER include/spdk/tree.h 
00:03:10.012 TEST_HEADER include/spdk/ublk.h 00:03:10.012 TEST_HEADER include/spdk/util.h 00:03:10.012 TEST_HEADER include/spdk/uuid.h 00:03:10.012 TEST_HEADER include/spdk/version.h 00:03:10.012 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.012 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.012 TEST_HEADER include/spdk/vhost.h 00:03:10.012 TEST_HEADER include/spdk/vmd.h 00:03:10.012 TEST_HEADER include/spdk/xor.h 00:03:10.012 TEST_HEADER include/spdk/zipf.h 00:03:10.012 CXX test/cpp_headers/accel.o 00:03:10.012 LINK bdev_svc 00:03:10.012 LINK hello_bdev 00:03:10.012 LINK mkfs 00:03:10.272 LINK hello_blob 00:03:10.272 CXX test/cpp_headers/accel_module.o 00:03:10.272 LINK spdk_trace 00:03:10.272 LINK test_dma 00:03:10.272 CXX test/cpp_headers/assert.o 00:03:10.272 LINK bdevio 00:03:10.272 LINK dif 00:03:10.532 LINK accel_perf 00:03:10.532 CXX test/cpp_headers/barrier.o 00:03:10.532 CXX test/cpp_headers/base64.o 00:03:10.791 CC app/trace_record/trace_record.o 00:03:10.791 CXX test/cpp_headers/bdev.o 00:03:11.050 LINK spdk_trace_record 00:03:11.050 CXX test/cpp_headers/bdev_module.o 00:03:11.050 CXX test/cpp_headers/bdev_zone.o 00:03:11.310 CC app/nvmf_tgt/nvmf_main.o 00:03:11.310 CXX test/cpp_headers/bit_array.o 00:03:11.310 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.569 CXX test/cpp_headers/bit_pool.o 00:03:11.569 LINK nvmf_tgt 00:03:11.569 CXX test/cpp_headers/blob.o 00:03:11.828 CXX test/cpp_headers/blob_bdev.o 00:03:11.828 LINK nvme_fuzz 00:03:12.088 CXX test/cpp_headers/blobfs.o 00:03:12.088 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.347 CXX test/cpp_headers/conf.o 00:03:12.347 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.606 CXX test/cpp_headers/config.o 00:03:12.606 LINK iscsi_tgt 00:03:12.606 CXX test/cpp_headers/cpuset.o 00:03:12.606 CXX test/cpp_headers/crc16.o 00:03:12.866 CXX test/cpp_headers/crc32.o 00:03:12.866 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.139 CXX test/cpp_headers/crc64.o 00:03:13.139 CC app/spdk_tgt/spdk_tgt.o 00:03:13.139 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.139 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.139 CC app/spdk_lspci/spdk_lspci.o 00:03:13.139 CC examples/blob/cli/blobcli.o 00:03:13.139 CC app/spdk_nvme_perf/perf.o 00:03:13.139 CXX test/cpp_headers/dif.o 00:03:13.139 LINK spdk_tgt 00:03:13.139 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.419 LINK spdk_lspci 00:03:13.419 CXX test/cpp_headers/dma.o 00:03:13.419 CXX test/cpp_headers/endian.o 00:03:13.678 CXX test/cpp_headers/env.o 00:03:13.678 LINK blobcli 00:03:13.678 LINK vhost_fuzz 00:03:13.678 LINK bdevperf 00:03:13.937 CXX test/cpp_headers/env_dpdk.o 00:03:13.937 CXX test/cpp_headers/event.o 00:03:13.937 LINK spdk_nvme_perf 00:03:14.196 CXX test/cpp_headers/fd.o 00:03:14.196 CXX test/cpp_headers/fd_group.o 00:03:14.196 CXX test/cpp_headers/file.o 00:03:14.456 CXX test/cpp_headers/ftl.o 00:03:14.456 CXX test/cpp_headers/gpt_spec.o 00:03:14.456 CXX test/cpp_headers/hexlify.o 00:03:14.456 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.714 CC app/spdk_nvme_identify/identify.o 00:03:14.714 CXX test/cpp_headers/histogram_data.o 00:03:14.714 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.714 CXX test/cpp_headers/idxd.o 00:03:14.973 CC app/spdk_top/spdk_top.o 00:03:14.973 CXX test/cpp_headers/idxd_spec.o 00:03:14.973 LINK spdk_nvme_discover 00:03:14.973 LINK mem_callbacks 00:03:14.973 CXX test/cpp_headers/init.o 00:03:14.973 LINK iscsi_fuzz 00:03:15.232 CXX test/cpp_headers/ioat.o 00:03:15.232 CC app/vhost/vhost.o 00:03:15.232 CXX test/cpp_headers/ioat_spec.o 
00:03:15.491 LINK vhost 00:03:15.491 CC test/env/vtophys/vtophys.o 00:03:15.491 CXX test/cpp_headers/iscsi_spec.o 00:03:15.491 LINK vtophys 00:03:15.491 LINK spdk_nvme_identify 00:03:15.750 CXX test/cpp_headers/json.o 00:03:15.750 CXX test/cpp_headers/jsonrpc.o 00:03:15.750 LINK spdk_top 00:03:16.008 CXX test/cpp_headers/likely.o 00:03:16.008 CC test/event/event_perf/event_perf.o 00:03:16.266 CC test/app/histogram_perf/histogram_perf.o 00:03:16.266 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.266 CXX test/cpp_headers/log.o 00:03:16.266 LINK event_perf 00:03:16.267 CXX test/cpp_headers/lvol.o 00:03:16.267 LINK histogram_perf 00:03:16.267 LINK env_dpdk_post_init 00:03:16.525 CC test/lvol/esnap/esnap.o 00:03:16.525 CC examples/ioat/perf/perf.o 00:03:16.525 CC test/nvme/aer/aer.o 00:03:16.783 CXX test/cpp_headers/memory.o 00:03:16.783 CXX test/cpp_headers/mmio.o 00:03:17.042 LINK aer 00:03:17.042 CC test/rpc_client/rpc_client_test.o 00:03:17.042 LINK ioat_perf 00:03:17.042 CC test/app/jsoncat/jsoncat.o 00:03:17.042 CC test/event/reactor/reactor.o 00:03:17.042 CXX test/cpp_headers/nbd.o 00:03:17.300 CXX test/cpp_headers/notify.o 00:03:17.300 CC test/thread/poller_perf/poller_perf.o 00:03:17.300 LINK rpc_client_test 00:03:17.300 LINK reactor 00:03:17.300 LINK jsoncat 00:03:17.300 CXX test/cpp_headers/nvme.o 00:03:17.300 CC test/env/memory/memory_ut.o 00:03:17.300 LINK poller_perf 00:03:17.558 CXX test/cpp_headers/nvme_intel.o 00:03:17.558 CC examples/ioat/verify/verify.o 00:03:17.558 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.817 CC test/thread/lock/spdk_lock.o 00:03:17.817 LINK verify 00:03:17.817 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.075 CC test/nvme/reset/reset.o 00:03:18.075 CC test/app/stub/stub.o 00:03:18.075 CXX test/cpp_headers/nvme_spec.o 00:03:18.075 CC test/event/reactor_perf/reactor_perf.o 00:03:18.075 CXX test/cpp_headers/nvme_zns.o 00:03:18.075 LINK memory_ut 00:03:18.075 LINK stub 00:03:18.075 LINK reactor_perf 00:03:18.334 LINK reset 00:03:18.334 CXX test/cpp_headers/nvmf.o 00:03:18.334 CC app/spdk_dd/spdk_dd.o 00:03:18.334 CC test/env/pci/pci_ut.o 00:03:18.334 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:18.334 CC examples/nvme/hello_world/hello_world.o 00:03:18.334 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.593 LINK histogram_ut 00:03:18.593 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.593 LINK hello_world 00:03:18.593 LINK spdk_dd 00:03:18.852 LINK pci_ut 00:03:18.852 CXX test/cpp_headers/nvmf_spec.o 00:03:18.852 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:18.852 CC test/event/app_repeat/app_repeat.o 00:03:18.852 CXX test/cpp_headers/nvmf_transport.o 00:03:19.110 CXX test/cpp_headers/opal.o 00:03:19.110 CC test/event/scheduler/scheduler.o 00:03:19.110 LINK app_repeat 00:03:19.110 CC test/nvme/sgl/sgl.o 00:03:19.369 CC test/nvme/e2edp/nvme_dp.o 00:03:19.369 CXX test/cpp_headers/opal_spec.o 00:03:19.369 LINK scheduler 00:03:19.369 LINK sgl 00:03:19.369 LINK spdk_lock 00:03:19.627 CXX test/cpp_headers/pci_ids.o 00:03:19.627 LINK nvme_dp 00:03:19.627 CC examples/nvme/reconnect/reconnect.o 00:03:19.627 CXX test/cpp_headers/pipe.o 00:03:19.886 CXX test/cpp_headers/queue.o 00:03:19.886 CXX test/cpp_headers/reduce.o 00:03:19.886 LINK reconnect 00:03:19.886 CXX test/cpp_headers/rpc.o 00:03:20.145 CXX test/cpp_headers/scheduler.o 00:03:20.145 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:20.145 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:20.145 CXX test/cpp_headers/scsi.o 00:03:20.404 CXX test/cpp_headers/scsi_spec.o 00:03:20.404 
CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:20.663 CXX test/cpp_headers/sock.o 00:03:20.663 CC test/nvme/overhead/overhead.o 00:03:20.663 LINK tree_ut 00:03:20.663 CXX test/cpp_headers/stdinc.o 00:03:20.923 LINK blob_bdev_ut 00:03:20.923 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:20.923 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.923 CXX test/cpp_headers/string.o 00:03:20.923 LINK overhead 00:03:20.923 CC app/fio/nvme/fio_plugin.o 00:03:21.181 LINK accel_ut 00:03:21.181 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:21.181 CXX test/cpp_headers/thread.o 00:03:21.181 CXX test/cpp_headers/trace.o 00:03:21.440 CXX test/cpp_headers/trace_parser.o 00:03:21.440 LINK nvme_manage 00:03:21.440 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:21.440 LINK esnap 00:03:21.440 CXX test/cpp_headers/tree.o 00:03:21.440 LINK spdk_nvme 00:03:21.698 CXX test/cpp_headers/ublk.o 00:03:21.698 CXX test/cpp_headers/util.o 00:03:21.957 CXX test/cpp_headers/uuid.o 00:03:21.957 CC test/nvme/err_injection/err_injection.o 00:03:21.957 CXX test/cpp_headers/version.o 00:03:21.957 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:21.957 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.216 LINK blobfs_async_ut 00:03:22.216 LINK err_injection 00:03:22.216 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.216 LINK blobfs_bdev_ut 00:03:22.216 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:22.475 CXX test/cpp_headers/vhost.o 00:03:22.475 CC examples/nvme/arbitration/arbitration.o 00:03:22.475 CC test/unit/lib/event/app.c/app_ut.o 00:03:22.475 CXX test/cpp_headers/vmd.o 00:03:22.475 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:22.734 LINK dma_ut 00:03:22.734 LINK blobfs_sync_ut 00:03:22.734 CC app/fio/bdev/fio_plugin.o 00:03:22.734 LINK arbitration 00:03:22.734 CXX test/cpp_headers/xor.o 00:03:23.014 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:23.014 CXX test/cpp_headers/zipf.o 00:03:23.014 LINK ioat_ut 00:03:23.014 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:23.014 LINK app_ut 00:03:23.014 CC examples/nvme/hotplug/hotplug.o 00:03:23.271 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.271 CC test/nvme/startup/startup.o 00:03:23.271 LINK spdk_bdev 00:03:23.271 LINK cmb_copy 00:03:23.271 LINK hotplug 00:03:23.271 LINK startup 00:03:23.271 LINK init_grp_ut 00:03:23.529 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:23.529 CC examples/nvme/abort/abort.o 00:03:23.787 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.046 LINK pmr_persistence 00:03:24.046 LINK abort 00:03:24.046 LINK conn_ut 00:03:24.303 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:24.303 LINK reactor_ut 00:03:24.303 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:24.571 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:24.571 CC test/nvme/reserve/reserve.o 00:03:24.571 CC test/nvme/simple_copy/simple_copy.o 00:03:24.837 LINK reserve 00:03:24.837 LINK param_ut 00:03:24.837 LINK simple_copy 00:03:24.837 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:25.095 CC examples/sock/hello_world/hello_sock.o 00:03:25.095 LINK portal_grp_ut 00:03:25.095 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.352 LINK lsvmd 00:03:25.352 LINK hello_sock 00:03:25.352 LINK bdev_ut 00:03:25.352 CC examples/vmd/led/led.o 00:03:25.611 LINK led 00:03:25.611 LINK tgt_node_ut 00:03:25.869 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:25.869 CC test/nvme/connect_stress/connect_stress.o 00:03:25.869 CC test/nvme/boot_partition/boot_partition.o 00:03:25.869 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:25.869 CC 
test/unit/lib/json/json_util.c/json_util_ut.o 00:03:26.127 LINK connect_stress 00:03:26.127 LINK boot_partition 00:03:26.127 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:26.127 CC examples/nvmf/nvmf/nvmf.o 00:03:26.385 LINK json_util_ut 00:03:26.385 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:26.385 LINK nvmf 00:03:26.643 LINK iscsi_ut 00:03:26.643 CC test/unit/lib/log/log.c/log_ut.o 00:03:26.902 LINK jsonrpc_server_ut 00:03:26.902 LINK json_write_ut 00:03:26.902 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:26.902 LINK log_ut 00:03:26.902 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:27.161 CC examples/util/zipf/zipf.o 00:03:27.161 CC test/nvme/compliance/nvme_compliance.o 00:03:27.161 CC examples/thread/thread/thread_ex.o 00:03:27.161 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.420 LINK fused_ordering 00:03:27.420 LINK zipf 00:03:27.420 LINK notify_ut 00:03:27.420 LINK thread 00:03:27.420 LINK nvme_compliance 00:03:27.679 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.938 LINK doorbell_aers 00:03:27.938 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:28.196 LINK json_parse_ut 00:03:28.455 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:28.455 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:28.455 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:28.714 LINK lvol_ut 00:03:28.714 CC test/nvme/fdp/fdp.o 00:03:28.714 LINK blob_ut 00:03:28.972 CC test/nvme/cuse/cuse.o 00:03:28.972 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:29.231 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:29.231 LINK part_ut 00:03:29.490 LINK nvme_ut 00:03:29.490 LINK fdp 00:03:29.490 LINK nvme_ctrlr_cmd_ut 00:03:29.490 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:29.749 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:29.749 LINK cuse 00:03:29.749 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:29.749 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:29.749 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:30.008 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:30.008 LINK scsi_nvme_ut 00:03:30.008 LINK nvme_ns_ut 00:03:30.008 LINK dev_ut 00:03:30.265 CC examples/idxd/perf/perf.o 00:03:30.265 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:30.265 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:30.265 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:30.265 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:30.523 LINK lun_ut 00:03:30.523 LINK idxd_perf 00:03:30.782 LINK gpt_ut 00:03:30.782 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:30.782 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:30.782 LINK scsi_ut 00:03:31.042 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:31.042 LINK nvme_poll_group_ut 00:03:31.042 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.042 LINK nvme_ctrlr_ut 00:03:31.301 LINK nvme_ns_ocssd_cmd_ut 00:03:31.301 LINK interrupt_tgt 00:03:31.301 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:31.301 LINK nvme_qpair_ut 00:03:31.560 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:31.560 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:31.560 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:31.560 LINK nvme_ns_cmd_ut 00:03:31.841 LINK nvme_quirks_ut 00:03:31.841 LINK nvme_pcie_ut 00:03:31.841 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:31.841 LINK vbdev_lvol_ut 00:03:32.109 LINK scsi_bdev_ut 00:03:32.109 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:32.109 LINK iobuf_ut 00:03:32.109 LINK base64_ut 00:03:32.109 
CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:32.368 LINK pci_event_ut 00:03:32.368 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:32.368 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:32.368 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:32.368 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:32.627 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:32.627 LINK tcp_ut 00:03:32.886 LINK scsi_pr_ut 00:03:32.886 LINK bit_array_ut 00:03:32.886 LINK bdev_zone_ut 00:03:32.886 LINK sock_ut 00:03:32.886 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:32.886 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:33.144 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:33.144 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:33.144 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:33.144 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:33.402 LINK cpuset_ut 00:03:33.402 LINK thread_ut 00:03:33.402 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:33.662 LINK crc16_ut 00:03:33.662 LINK vbdev_zone_block_ut 00:03:33.662 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:33.920 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:33.920 LINK posix_ut 00:03:33.920 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:34.185 LINK crc32_ieee_ut 00:03:34.185 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:34.185 LINK bdev_raid_ut 00:03:34.185 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:34.447 LINK crc32c_ut 00:03:34.448 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:34.448 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:34.714 LINK crc64_ut 00:03:34.714 LINK nvme_transport_ut 00:03:34.714 LINK nvme_io_msg_ut 00:03:34.714 LINK nvme_tcp_ut 00:03:34.975 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:34.975 LINK bdev_raid_sb_ut 00:03:34.975 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:34.975 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:35.234 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:35.234 LINK subsystem_ut 00:03:35.234 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:35.234 LINK ctrlr_discovery_ut 00:03:35.493 LINK concat_ut 00:03:35.493 LINK raid1_ut 00:03:35.493 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:35.493 LINK ctrlr_ut 00:03:35.753 CC test/unit/lib/util/math.c/math_ut.o 00:03:35.753 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:35.753 LINK bdev_ut 00:03:35.753 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:35.753 LINK math_ut 00:03:35.753 LINK iov_ut 00:03:36.012 CC test/unit/lib/util/string.c/string_ut.o 00:03:36.012 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:36.012 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:36.012 LINK dif_ut 00:03:36.012 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:36.012 LINK ctrlr_bdev_ut 00:03:36.012 LINK pipe_ut 00:03:36.271 LINK string_ut 00:03:36.271 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:36.271 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:36.271 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:36.271 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:36.530 LINK nvme_pcie_common_ut 00:03:36.530 LINK xor_ut 00:03:36.789 LINK raid5f_ut 00:03:36.789 LINK nvme_fabric_ut 00:03:36.789 LINK nvme_opal_ut 00:03:36.789 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:36.789 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:37.048 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:37.048 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:37.048 CC 
test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:37.307 LINK subsystem_ut 00:03:37.307 LINK rpc_ut 00:03:37.307 LINK idxd_user_ut 00:03:37.307 LINK nvmf_ut 00:03:37.566 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:37.566 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:37.566 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:37.566 LINK nvme_cuse_ut 00:03:37.566 LINK bdev_nvme_ut 00:03:37.566 LINK idxd_ut 00:03:37.566 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:37.825 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:37.825 LINK ftl_l2p_ut 00:03:37.825 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:38.084 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:38.084 LINK common_ut 00:03:38.084 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:38.084 LINK ftl_bitmap_ut 00:03:38.084 LINK nvme_rdma_ut 00:03:38.385 LINK ftl_io_ut 00:03:38.385 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:38.385 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:38.385 LINK ftl_mempool_ut 00:03:38.952 LINK ftl_mngt_ut 00:03:38.952 LINK ftl_band_ut 00:03:39.520 LINK vhost_ut 00:03:39.520 LINK ftl_layout_upgrade_ut 00:03:39.520 LINK rdma_ut 00:03:39.520 LINK ftl_sb_ut 00:03:39.778 LINK transport_ut 00:03:40.037 00:03:40.037 real 1m45.686s 00:03:40.037 user 9m15.481s 00:03:40.037 sys 1m51.663s 00:03:40.037 05:22:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:40.037 05:22:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.037 ************************************ 00:03:40.037 END TEST unittest_build 00:03:40.037 ************************************ 00:03:40.037 05:22:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.037 05:22:43 -- nvmf/common.sh@7 -- # uname -s 00:03:40.037 05:22:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.037 05:22:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.037 05:22:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.037 05:22:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.037 05:22:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.037 05:22:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.037 05:22:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.037 05:22:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.037 05:22:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.037 05:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.037 05:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:507e7c9b-b16e-43a7-aa23-60e673fd02e7 00:03:40.037 05:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=507e7c9b-b16e-43a7-aa23-60e673fd02e7 00:03:40.037 05:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.037 05:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.037 05:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.037 05:22:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.037 05:22:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.037 05:22:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.037 05:22:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.037 05:22:44 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:40.037 05:22:44 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:40.037 05:22:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:40.037 05:22:44 -- paths/export.sh@5 -- # export PATH 00:03:40.037 05:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:40.037 05:22:44 -- nvmf/common.sh@46 -- # : 0 00:03:40.037 05:22:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:40.037 05:22:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:40.037 05:22:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:40.037 05:22:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.037 05:22:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.037 05:22:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:40.296 05:22:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:40.296 05:22:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:40.296 05:22:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.296 05:22:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.296 05:22:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.296 05:22:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:40.296 05:22:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.296 05:22:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.296 05:22:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.296 05:22:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.296 05:22:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.296 05:22:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:40.296 05:22:44 -- spdk/autotest.sh@48 -- # udevadm_pid=92435 00:03:40.296 05:22:44 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:40.296 05:22:44 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.296 05:22:44 -- spdk/autotest.sh@54 -- # echo 92446 00:03:40.296 05:22:44 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.296 05:22:44 -- spdk/autotest.sh@56 -- # echo 92447 00:03:40.296 05:22:44 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:40.296 05:22:44 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:40.296 05:22:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT 
SIGTERM EXIT 00:03:40.296 05:22:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:40.296 05:22:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:40.296 05:22:44 -- common/autotest_common.sh@10 -- # set +x 00:03:40.296 05:22:44 -- spdk/autotest.sh@70 -- # create_test_list 00:03:40.296 05:22:44 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:40.296 05:22:44 -- common/autotest_common.sh@10 -- # set +x 00:03:40.296 05:22:44 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:40.296 05:22:44 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:40.296 05:22:44 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:40.296 05:22:44 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:40.296 05:22:44 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:40.296 05:22:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:40.296 05:22:44 -- common/autotest_common.sh@1440 -- # uname 00:03:40.297 05:22:44 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:40.297 05:22:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:40.297 05:22:44 -- common/autotest_common.sh@1460 -- # uname 00:03:40.297 05:22:44 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:40.297 05:22:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:40.297 05:22:44 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:40.297 05:22:44 -- spdk/autotest.sh@83 -- # hash lcov 00:03:40.297 05:22:44 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:40.297 05:22:44 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:40.297 --rc lcov_branch_coverage=1 00:03:40.297 --rc lcov_function_coverage=1 00:03:40.297 --rc genhtml_branch_coverage=1 00:03:40.297 --rc genhtml_function_coverage=1 00:03:40.297 --rc genhtml_legend=1 00:03:40.297 --rc geninfo_all_blocks=1 00:03:40.297 ' 00:03:40.297 05:22:44 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:40.297 --rc lcov_branch_coverage=1 00:03:40.297 --rc lcov_function_coverage=1 00:03:40.297 --rc genhtml_branch_coverage=1 00:03:40.297 --rc genhtml_function_coverage=1 00:03:40.297 --rc genhtml_legend=1 00:03:40.297 --rc geninfo_all_blocks=1 00:03:40.297 ' 00:03:40.297 05:22:44 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:40.297 --rc lcov_branch_coverage=1 00:03:40.297 --rc lcov_function_coverage=1 00:03:40.297 --rc genhtml_branch_coverage=1 00:03:40.297 --rc genhtml_function_coverage=1 00:03:40.297 --rc genhtml_legend=1 00:03:40.297 --rc geninfo_all_blocks=1 00:03:40.297 --no-external' 00:03:40.297 05:22:44 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:40.297 --rc lcov_branch_coverage=1 00:03:40.297 --rc lcov_function_coverage=1 00:03:40.297 --rc genhtml_branch_coverage=1 00:03:40.297 --rc genhtml_function_coverage=1 00:03:40.297 --rc genhtml_legend=1 00:03:40.297 --rc geninfo_all_blocks=1 00:03:40.297 --no-external' 00:03:40.297 05:22:44 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:40.297 lcov: LCOV version 1.15 00:03:40.297 05:22:44 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o 
/home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:58.383 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:58.383 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:58.383 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:58.383 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:58.383 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:58.383 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:20.313 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:20.313 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:20.313 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:20.313 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:20.313 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:20.314 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:20.314 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:20.314 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no 
functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:20.315 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:20.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:21.689 05:23:25 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:21.689 05:23:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.689 05:23:25 -- common/autotest_common.sh@10 -- # set +x 00:04:21.689 05:23:25 -- spdk/autotest.sh@102 -- # rm -f 00:04:21.689 05:23:25 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:21.948 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:21.948 05:23:25 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:21.948 05:23:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:21.948 05:23:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:21.948 05:23:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:21.948 05:23:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:21.948 05:23:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:21.948 05:23:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:21.948 05:23:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.948 05:23:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:21.948 05:23:25 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:21.948 05:23:25 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:21.948 05:23:25 -- spdk/autotest.sh@121 -- # grep -v p 00:04:21.948 05:23:25 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:21.948 05:23:25 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:21.948 05:23:25 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:21.948 05:23:25 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:21.948 05:23:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:21.948 No valid GPT data, bailing 00:04:21.948 05:23:25 -- scripts/common.sh@393 -- # blkid 
-s PTTYPE -o value /dev/nvme0n1 00:04:21.948 05:23:25 -- scripts/common.sh@393 -- # pt= 00:04:21.948 05:23:25 -- scripts/common.sh@394 -- # return 1 00:04:21.948 05:23:25 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:21.948 1+0 records in 00:04:21.948 1+0 records out 00:04:21.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.003905 s, 269 MB/s 00:04:21.948 05:23:25 -- spdk/autotest.sh@129 -- # sync 00:04:21.948 05:23:25 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:21.948 05:23:25 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:21.948 05:23:25 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.325 05:23:27 -- spdk/autotest.sh@135 -- # uname -s 00:04:23.325 05:23:27 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:23.325 05:23:27 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:23.325 05:23:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.325 05:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.325 05:23:27 -- common/autotest_common.sh@10 -- # set +x 00:04:23.584 ************************************ 00:04:23.584 START TEST setup.sh 00:04:23.584 ************************************ 00:04:23.584 05:23:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:23.584 * Looking for test storage... 00:04:23.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:23.584 05:23:27 -- setup/test-setup.sh@10 -- # uname -s 00:04:23.584 05:23:27 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:23.584 05:23:27 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:23.584 05:23:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.584 05:23:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.584 05:23:27 -- common/autotest_common.sh@10 -- # set +x 00:04:23.584 ************************************ 00:04:23.584 START TEST acl 00:04:23.584 ************************************ 00:04:23.584 05:23:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:23.584 * Looking for test storage... 
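For orientation, the pre-cleanup pass traced just above (the block_in_use check followed by the dd wipe) boils down to roughly the shell logic below. This is a condensed paraphrase of the commands visible in the trace, not the verbatim SPDK scripts; the exact in-use test is an assumption.

  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
      # a device counts as "in use" if spdk-gpt.py finds a valid GPT or blkid reports a partition-table type
      if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" >/dev/null 2>&1 &&
         [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB so the setup tests start from a clean disk
      fi
  done
  sync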
00:04:23.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:23.584 05:23:27 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:23.584 05:23:27 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:23.584 05:23:27 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:23.584 05:23:27 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:23.584 05:23:27 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:23.584 05:23:27 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:23.584 05:23:27 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:23.584 05:23:27 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.584 05:23:27 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:23.584 05:23:27 -- setup/acl.sh@12 -- # devs=() 00:04:23.584 05:23:27 -- setup/acl.sh@12 -- # declare -a devs 00:04:23.584 05:23:27 -- setup/acl.sh@13 -- # drivers=() 00:04:23.584 05:23:27 -- setup/acl.sh@13 -- # declare -A drivers 00:04:23.584 05:23:27 -- setup/acl.sh@51 -- # setup reset 00:04:23.584 05:23:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:23.584 05:23:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.151 05:23:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:24.151 05:23:27 -- setup/acl.sh@16 -- # local dev driver 00:04:24.151 05:23:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.151 05:23:27 -- setup/acl.sh@15 -- # setup output status 00:04:24.151 05:23:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.151 05:23:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:24.151 Hugepages 00:04:24.151 node hugesize free / total 00:04:24.151 05:23:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:24.151 05:23:28 -- setup/acl.sh@19 -- # continue 00:04:24.151 05:23:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.151 00:04:24.151 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.151 05:23:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:24.151 05:23:28 -- setup/acl.sh@19 -- # continue 00:04:24.151 05:23:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.410 05:23:28 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:24.410 05:23:28 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:24.410 05:23:28 -- setup/acl.sh@20 -- # continue 00:04:24.410 05:23:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.410 05:23:28 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:24.410 05:23:28 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:24.410 05:23:28 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:24.410 05:23:28 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:24.410 05:23:28 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:24.410 05:23:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:24.410 05:23:28 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:24.410 05:23:28 -- setup/acl.sh@54 -- # run_test denied denied 00:04:24.410 05:23:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.410 05:23:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.410 05:23:28 -- common/autotest_common.sh@10 -- # set +x 00:04:24.410 ************************************ 00:04:24.410 START TEST denied 00:04:24.410 ************************************ 00:04:24.410 05:23:28 -- common/autotest_common.sh@1104 -- # denied 00:04:24.410 05:23:28 -- 
setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:24.410 05:23:28 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:24.410 05:23:28 -- setup/acl.sh@38 -- # setup output config 00:04:24.410 05:23:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.410 05:23:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.785 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:25.785 05:23:29 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:25.785 05:23:29 -- setup/acl.sh@28 -- # local dev driver 00:04:25.785 05:23:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:25.785 05:23:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:25.785 05:23:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:25.785 05:23:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:25.785 05:23:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:25.785 05:23:29 -- setup/acl.sh@41 -- # setup reset 00:04:25.785 05:23:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.785 05:23:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.355 00:04:26.355 real 0m1.864s 00:04:26.355 user 0m0.522s 00:04:26.355 sys 0m1.392s 00:04:26.355 05:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.355 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:04:26.355 ************************************ 00:04:26.355 END TEST denied 00:04:26.355 ************************************ 00:04:26.355 05:23:30 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:26.355 05:23:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.355 05:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.355 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:04:26.355 ************************************ 00:04:26.355 START TEST allowed 00:04:26.355 ************************************ 00:04:26.355 05:23:30 -- common/autotest_common.sh@1104 -- # allowed 00:04:26.355 05:23:30 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:26.355 05:23:30 -- setup/acl.sh@45 -- # setup output config 00:04:26.355 05:23:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.355 05:23:30 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:26.355 05:23:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.257 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.257 05:23:31 -- setup/acl.sh@47 -- # verify 00:04:28.257 05:23:31 -- setup/acl.sh@28 -- # local dev driver 00:04:28.257 05:23:31 -- setup/acl.sh@48 -- # setup reset 00:04:28.257 05:23:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.257 05:23:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.515 00:04:28.515 real 0m2.155s 00:04:28.515 user 0m0.486s 00:04:28.515 sys 0m1.682s 00:04:28.515 05:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.515 05:23:32 -- common/autotest_common.sh@10 -- # set +x 00:04:28.515 ************************************ 00:04:28.515 END TEST allowed 00:04:28.515 ************************************ 00:04:28.515 ************************************ 00:04:28.515 END TEST acl 00:04:28.515 ************************************ 00:04:28.515 00:04:28.515 real 0m5.003s 00:04:28.515 user 0m1.601s 00:04:28.515 sys 0m3.521s 00:04:28.515 05:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.515 
05:23:32 -- common/autotest_common.sh@10 -- # set +x 00:04:28.515 05:23:32 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.515 05:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.516 05:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.516 05:23:32 -- common/autotest_common.sh@10 -- # set +x 00:04:28.516 ************************************ 00:04:28.516 START TEST hugepages 00:04:28.516 ************************************ 00:04:28.516 05:23:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.775 * Looking for test storage... 00:04:28.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.775 05:23:32 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:28.775 05:23:32 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:28.775 05:23:32 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:28.775 05:23:32 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:28.775 05:23:32 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:28.775 05:23:32 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:28.775 05:23:32 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:28.775 05:23:32 -- setup/common.sh@18 -- # local node= 00:04:28.775 05:23:32 -- setup/common.sh@19 -- # local var val 00:04:28.775 05:23:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.775 05:23:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.775 05:23:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.775 05:23:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.775 05:23:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.775 05:23:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 2976312 kB' 'MemAvailable: 7404408 kB' 'Buffers: 35132 kB' 'Cached: 4531696 kB' 'SwapCached: 0 kB' 'Active: 998412 kB' 'Inactive: 3687328 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 129260 kB' 'Active(file): 997356 kB' 'Inactive(file): 3558068 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 147672 kB' 'Mapped: 68264 kB' 'Shmem: 2600 kB' 'KReclaimable: 194188 kB' 'Slab: 257840 kB' 'SReclaimable: 194188 kB' 'SUnreclaim: 63652 kB' 'KernelStack: 4496 kB' 'PageTables: 3504 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024332 kB' 'Committed_AS: 495136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.775 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.775 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.776 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.776 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 
05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # continue 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.777 05:23:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.777 05:23:32 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.777 05:23:32 -- setup/common.sh@33 -- # echo 2048 00:04:28.777 05:23:32 -- setup/common.sh@33 -- # return 0 00:04:28.777 05:23:32 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:28.777 05:23:32 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:28.777 05:23:32 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:28.777 05:23:32 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:28.777 05:23:32 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:28.777 05:23:32 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:28.777 05:23:32 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:28.777 05:23:32 -- setup/hugepages.sh@207 -- # get_nodes 00:04:28.777 05:23:32 -- setup/hugepages.sh@27 -- # local node 00:04:28.777 05:23:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.777 05:23:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:28.777 05:23:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.777 05:23:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.777 05:23:32 -- setup/hugepages.sh@208 -- # clear_hp 00:04:28.777 05:23:32 -- setup/hugepages.sh@37 -- # local node hp 00:04:28.777 05:23:32 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:28.777 05:23:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.777 05:23:32 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.777 05:23:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.777 05:23:32 -- setup/hugepages.sh@41 -- # echo 0 
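The long run of '[[ <field> == Hugepagesize ]]' / 'continue' trace lines above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key; clear_hp then echoes 0 into each node's hugepages-*/nr_hugepages so every test starts with no reserved pages. In plain shell the lookup pattern is roughly the sketch below, a condensed illustration of what the trace shows rather than the verbatim SPDK function:

  get=Hugepagesize
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # skip every other meminfo field
      echo "$val"                        # 2048 (kB) on this runner
      break
  done < /proc/meminfo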
00:04:28.777 05:23:32 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:28.777 05:23:32 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:28.777 05:23:32 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:28.777 05:23:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.777 05:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.777 05:23:32 -- common/autotest_common.sh@10 -- # set +x 00:04:28.777 ************************************ 00:04:28.777 START TEST default_setup 00:04:28.777 ************************************ 00:04:28.777 05:23:32 -- common/autotest_common.sh@1104 -- # default_setup 00:04:28.777 05:23:32 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:28.777 05:23:32 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.777 05:23:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.777 05:23:32 -- setup/hugepages.sh@51 -- # shift 00:04:28.777 05:23:32 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.777 05:23:32 -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.777 05:23:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.777 05:23:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.777 05:23:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.777 05:23:32 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.777 05:23:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.777 05:23:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.777 05:23:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.777 05:23:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.777 05:23:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.777 05:23:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.777 05:23:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.777 05:23:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:28.777 05:23:32 -- setup/hugepages.sh@73 -- # return 0 00:04:28.777 05:23:32 -- setup/hugepages.sh@137 -- # setup output 00:04:28.777 05:23:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.777 05:23:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:29.294 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.862 05:23:33 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:29.862 05:23:33 -- setup/hugepages.sh@89 -- # local node 00:04:29.862 05:23:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.862 05:23:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.862 05:23:33 -- setup/hugepages.sh@92 -- # local surp 00:04:29.862 05:23:33 -- setup/hugepages.sh@93 -- # local resv 00:04:29.862 05:23:33 -- setup/hugepages.sh@94 -- # local anon 00:04:29.862 05:23:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.862 05:23:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.862 05:23:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.862 05:23:33 -- setup/common.sh@18 -- # local node= 00:04:29.862 05:23:33 -- setup/common.sh@19 -- # local var val 00:04:29.862 05:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.862 05:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.862 05:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.862 05:23:33 -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:04:29.862 05:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.862 05:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.862 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 05:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063432 kB' 'MemAvailable: 9491356 kB' 'Buffers: 35132 kB' 'Cached: 4531788 kB' 'SwapCached: 0 kB' 'Active: 998420 kB' 'Inactive: 3702436 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144544 kB' 'Active(file): 997368 kB' 'Inactive(file): 3557892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163164 kB' 'Mapped: 68064 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258208 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 64028 kB' 'KernelStack: 4368 kB' 'PageTables: 3656 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:29.862 05:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.862 05:23:33 -- setup/common.sh@32 -- # continue 00:04:29.862 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.862 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.862 05:23:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- 
setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.124 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.124 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.125 05:23:33 -- setup/common.sh@33 -- # echo 0 00:04:30.125 05:23:33 -- setup/common.sh@33 -- # return 0 00:04:30.125 05:23:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.125 05:23:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.125 05:23:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.125 05:23:33 -- setup/common.sh@18 -- # local node= 00:04:30.125 05:23:33 -- setup/common.sh@19 -- # local var val 00:04:30.125 05:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.125 05:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.125 05:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.125 05:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.125 05:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.125 05:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063432 kB' 'MemAvailable: 9491356 kB' 'Buffers: 35132 kB' 'Cached: 4531788 kB' 'SwapCached: 0 kB' 'Active: 998420 kB' 'Inactive: 3702400 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144508 kB' 'Active(file): 997368 kB' 'Inactive(file): 3557892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163128 kB' 'Mapped: 68064 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258208 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 64028 kB' 'KernelStack: 4352 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.125 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.125 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.126 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.126 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.127 05:23:33 -- setup/common.sh@33 -- # echo 0 00:04:30.127 05:23:33 -- setup/common.sh@33 -- # return 0 00:04:30.127 05:23:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.127 05:23:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.127 05:23:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.127 05:23:33 -- setup/common.sh@18 -- # local node= 00:04:30.127 05:23:33 -- setup/common.sh@19 -- # local var val 00:04:30.127 05:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.127 05:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.127 05:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.127 05:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.127 05:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.127 05:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063432 kB' 'MemAvailable: 9491356 kB' 'Buffers: 35132 kB' 'Cached: 4531788 kB' 'SwapCached: 0 kB' 'Active: 998412 kB' 'Inactive: 3702280 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144388 kB' 'Active(file): 997368 kB' 'Inactive(file): 3557892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163024 kB' 'Mapped: 68060 kB' 'Shmem: 2596 
kB' 'KReclaimable: 194180 kB' 'Slab: 258224 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 64044 kB' 'KernelStack: 4384 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.127 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.127 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- 
setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.128 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.128 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.129 05:23:33 -- setup/common.sh@33 -- # echo 0 00:04:30.129 05:23:33 -- setup/common.sh@33 -- # return 0 00:04:30.129 05:23:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:30.129 05:23:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.129 nr_hugepages=1024 00:04:30.129 resv_hugepages=0 00:04:30.129 05:23:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.129 surplus_hugepages=0 00:04:30.129 05:23:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.129 anon_hugepages=0 00:04:30.129 05:23:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.129 05:23:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.129 05:23:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.129 05:23:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.129 05:23:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.129 05:23:33 -- setup/common.sh@18 -- # local node= 00:04:30.129 
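The repeated "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" blocks traced above come from the get_meminfo helper in setup/common.sh: it snapshots the meminfo file with mapfile and then walks the fields one at a time until the requested key matches, echoing its value (here AnonHugePages -> anon=0, HugePages_Surp -> surp=0, HugePages_Rsvd -> resv=0). A minimal bash sketch of that lookup, written only to make the trace readable; the real helper also takes an optional NUMA node argument, which shows up later in this log:

# Illustrative only -- not the actual setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # value in kB, or a bare count for HugePages_* fields
            return 0
        fi
    done < /proc/meminfo
}

# e.g. get_meminfo_sketch HugePages_Total  ->  1024 on this runner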
05:23:33 -- setup/common.sh@19 -- # local var val 00:04:30.129 05:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.129 05:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.129 05:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.129 05:23:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.129 05:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.129 05:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063432 kB' 'MemAvailable: 9491356 kB' 'Buffers: 35132 kB' 'Cached: 4531788 kB' 'SwapCached: 0 kB' 'Active: 998412 kB' 'Inactive: 3702280 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144388 kB' 'Active(file): 997368 kB' 'Inactive(file): 3557892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163024 kB' 'Mapped: 68060 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258224 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 64044 kB' 'KernelStack: 4384 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- 
# [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.129 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.129 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.130 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.130 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.131 05:23:33 -- setup/common.sh@33 -- # echo 1024 00:04:30.131 05:23:33 -- setup/common.sh@33 -- # return 0 00:04:30.131 05:23:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.131 05:23:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.131 05:23:33 -- setup/hugepages.sh@27 -- # local node 00:04:30.131 05:23:33 -- 
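HugePages_Total has now been read back as 1024, so the two arithmetic tests in the trace, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), both hold: every configured page is accounted for and none of it is surplus or reserved. Spelled out with this run's values (the helper name is reused from the sketch above; exactly which variable the left-hand 1024 expands from is not visible in the trace):

# Accounting check as traced above, with this run's values filled in.
nr_hugepages=1024                              # requested hugepage count
surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run

(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
(( total == nr_hugepages ))               || echo 'unexpected surplus/reserved pages'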
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.131 05:23:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.131 05:23:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.131 05:23:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.131 05:23:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.131 05:23:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.131 05:23:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.131 05:23:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.131 05:23:33 -- setup/common.sh@18 -- # local node=0 00:04:30.131 05:23:33 -- setup/common.sh@19 -- # local var val 00:04:30.131 05:23:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.131 05:23:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.131 05:23:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.131 05:23:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.131 05:23:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.131 05:23:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063432 kB' 'MemUsed: 7179540 kB' 'SwapCached: 0 kB' 'Active: 998412 kB' 'Inactive: 3702280 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144388 kB' 'Active(file): 997368 kB' 'Inactive(file): 3557892 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 4566920 kB' 'Mapped: 68060 kB' 'AnonPages: 163024 kB' 'Shmem: 2596 kB' 'KernelStack: 4452 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194180 kB' 'Slab: 258224 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 64044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 
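The meminfo dump a little above this point was taken per node rather than system-wide: with node=0, get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from every line (the "${mem[@]#Node +([0-9]) }" expansion in the trace) so the same key/value loop can be reused; that file also carries the per-node MemUsed figure seen in the dump. A per-node variant of the earlier sketch, again purely illustrative:

# Illustrative per-node variant; extglob is needed for the +([0-9]) pattern.
get_node_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local -a mem
    shopt -s extglob
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val"
    done
}

# From the trace: get_node_meminfo_sketch HugePages_Surp 0  ->  0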
-- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.131 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.131 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # continue 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.132 05:23:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.132 05:23:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.132 05:23:33 -- setup/common.sh@33 -- # echo 0 00:04:30.132 05:23:33 -- setup/common.sh@33 -- # return 0 00:04:30.132 05:23:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.132 05:23:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.132 05:23:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.132 05:23:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.132 node0=1024 expecting 1024 00:04:30.132 05:23:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.132 05:23:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.132 00:04:30.132 real 0m1.363s 00:04:30.132 user 0m0.316s 00:04:30.132 sys 0m1.044s 00:04:30.132 05:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.132 05:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:30.132 ************************************ 00:04:30.132 END TEST default_setup 00:04:30.132 ************************************ 00:04:30.132 05:23:33 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:30.132 05:23:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.132 05:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.132 05:23:33 -- common/autotest_common.sh@10 -- # set +x 00:04:30.132 ************************************ 00:04:30.132 START TEST per_node_1G_alloc 00:04:30.132 ************************************ 00:04:30.132 05:23:34 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:30.132 05:23:34 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:30.132 05:23:34 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:30.132 05:23:34 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:30.132 05:23:34 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:30.132 05:23:34 -- setup/hugepages.sh@51 -- # shift 00:04:30.132 05:23:34 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:30.132 05:23:34 -- setup/hugepages.sh@52 -- # local node_ids 00:04:30.132 05:23:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.132 05:23:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:30.132 05:23:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:30.132 05:23:34 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:30.132 05:23:34 -- 
setup/hugepages.sh@62 -- # local user_nodes 00:04:30.132 05:23:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:30.132 05:23:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.132 05:23:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.132 05:23:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.132 05:23:34 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:30.132 05:23:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:30.132 05:23:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:30.132 05:23:34 -- setup/hugepages.sh@73 -- # return 0 00:04:30.132 05:23:34 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:30.132 05:23:34 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:30.132 05:23:34 -- setup/hugepages.sh@146 -- # setup output 00:04:30.132 05:23:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.132 05:23:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:30.392 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:30.963 05:23:34 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:30.963 05:23:34 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:30.963 05:23:34 -- setup/hugepages.sh@89 -- # local node 00:04:30.963 05:23:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.963 05:23:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.963 05:23:34 -- setup/hugepages.sh@92 -- # local surp 00:04:30.963 05:23:34 -- setup/hugepages.sh@93 -- # local resv 00:04:30.963 05:23:34 -- setup/hugepages.sh@94 -- # local anon 00:04:30.963 05:23:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.963 05:23:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.963 05:23:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.963 05:23:34 -- setup/common.sh@18 -- # local node= 00:04:30.963 05:23:34 -- setup/common.sh@19 -- # local var val 00:04:30.963 05:23:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.963 05:23:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.963 05:23:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.963 05:23:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.963 05:23:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.963 05:23:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6110612 kB' 'MemAvailable: 10538536 kB' 'Buffers: 35132 kB' 'Cached: 4531788 kB' 'SwapCached: 0 kB' 'Active: 998448 kB' 'Inactive: 3702872 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 145004 kB' 'Active(file): 997392 kB' 'Inactive(file): 3557868 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163552 kB' 'Mapped: 68140 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258072 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63892 kB' 'KernelStack: 4492 kB' 'PageTables: 4044 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- 
setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.963 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.963 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.964 05:23:34 -- setup/common.sh@33 -- # echo 0 00:04:30.964 05:23:34 -- setup/common.sh@33 -- # return 0 00:04:30.964 05:23:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.964 05:23:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.964 05:23:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.964 05:23:34 -- setup/common.sh@18 -- # local node= 00:04:30.964 05:23:34 -- setup/common.sh@19 -- # local var val 00:04:30.964 05:23:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.964 05:23:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.964 05:23:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.964 05:23:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.964 05:23:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.964 05:23:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6110864 kB' 'MemAvailable: 10538788 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3702664 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 144800 kB' 'Active(file): 997396 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163484 kB' 'Mapped: 68180 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258080 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63900 kB' 'KernelStack: 4404 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # 
continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.964 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.964 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.965 05:23:34 -- setup/common.sh@33 -- # echo 0 00:04:30.965 05:23:34 -- setup/common.sh@33 -- # return 0 00:04:30.965 05:23:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.965 05:23:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.965 05:23:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.965 05:23:34 -- setup/common.sh@18 -- # local node= 00:04:30.965 05:23:34 -- setup/common.sh@19 -- # local var val 00:04:30.965 05:23:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.965 05:23:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.965 05:23:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.965 05:23:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.965 05:23:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.965 05:23:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.965 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.965 05:23:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6111060 kB' 'MemAvailable: 10538984 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998448 kB' 'Inactive: 3702476 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144612 kB' 'Active(file): 997396 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163260 kB' 'Mapped: 68024 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258008 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63828 kB' 'KernelStack: 4384 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.965 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 
05:23:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.966 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.966 05:23:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 
05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.967 05:23:34 -- setup/common.sh@33 -- # echo 0 00:04:30.967 05:23:34 -- setup/common.sh@33 -- # return 0 00:04:30.967 05:23:34 -- setup/hugepages.sh@100 -- # resv=0 00:04:30.967 nr_hugepages=512 00:04:30.967 05:23:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:30.967 resv_hugepages=0 00:04:30.967 05:23:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.967 surplus_hugepages=0 00:04:30.967 05:23:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.967 anon_hugepages=0 00:04:30.967 05:23:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.967 05:23:34 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.967 05:23:34 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:30.967 05:23:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.967 05:23:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.967 05:23:34 -- setup/common.sh@18 -- # local node= 00:04:30.967 05:23:34 -- setup/common.sh@19 -- # local var val 00:04:30.967 05:23:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.967 05:23:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.967 05:23:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.967 05:23:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.967 05:23:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.967 05:23:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6111060 kB' 'MemAvailable: 10538984 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998448 kB' 'Inactive: 3702216 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144352 kB' 'Active(file): 997396 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163000 kB' 'Mapped: 68024 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 258008 kB' 'SReclaimable: 194180 
kB' 'SUnreclaim: 63828 kB' 'KernelStack: 4384 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.967 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.967 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 
-- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.968 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.968 05:23:34 -- setup/common.sh@33 -- # echo 512 00:04:30.968 05:23:34 -- setup/common.sh@33 -- # return 0 00:04:30.968 05:23:34 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.968 05:23:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.968 05:23:34 -- setup/hugepages.sh@27 -- # local node 00:04:30.968 05:23:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.968 05:23:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.968 05:23:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.968 05:23:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.968 05:23:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.968 05:23:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.968 05:23:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.968 05:23:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.968 05:23:34 -- setup/common.sh@18 -- # local node=0 00:04:30.968 05:23:34 -- setup/common.sh@19 -- # local var val 00:04:30.968 05:23:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.968 05:23:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.968 05:23:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.968 05:23:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.968 05:23:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.968 05:23:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.968 
05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.968 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6111060 kB' 'MemUsed: 6131912 kB' 'SwapCached: 0 kB' 'Active: 998448 kB' 'Inactive: 3702216 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144352 kB' 'Active(file): 997396 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 4566924 kB' 'Mapped: 68024 kB' 'AnonPages: 163000 kB' 'Shmem: 2596 kB' 'KernelStack: 4452 kB' 'PageTables: 3692 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194180 kB' 'Slab: 258008 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # continue 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.969 05:23:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.969 05:23:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.969 05:23:34 -- 
setup/common.sh@33 -- # echo 0 00:04:30.969 05:23:34 -- setup/common.sh@33 -- # return 0 00:04:30.969 05:23:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.969 05:23:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.969 05:23:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.969 05:23:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.969 node0=512 expecting 512 00:04:30.969 05:23:34 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:30.969 05:23:34 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:30.969 00:04:30.969 real 0m0.841s 00:04:30.969 user 0m0.343s 00:04:30.969 sys 0m0.538s 00:04:30.969 05:23:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.969 05:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:30.969 ************************************ 00:04:30.969 END TEST per_node_1G_alloc 00:04:30.969 ************************************ 00:04:30.969 05:23:34 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:30.969 05:23:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.969 05:23:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.969 05:23:34 -- common/autotest_common.sh@10 -- # set +x 00:04:30.970 ************************************ 00:04:30.970 START TEST even_2G_alloc 00:04:30.970 ************************************ 00:04:30.970 05:23:34 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:30.970 05:23:34 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:30.970 05:23:34 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.970 05:23:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.970 05:23:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.970 05:23:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.970 05:23:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.970 05:23:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.970 05:23:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.970 05:23:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.970 05:23:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.970 05:23:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:30.970 05:23:34 -- setup/hugepages.sh@83 -- # : 0 00:04:30.970 05:23:34 -- setup/hugepages.sh@84 -- # : 0 00:04:30.970 05:23:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.970 05:23:34 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:30.970 05:23:34 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:30.970 05:23:34 -- setup/hugepages.sh@153 -- # setup output 00:04:30.970 05:23:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.970 05:23:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:31.487 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.058 05:23:35 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:32.058 05:23:35 -- setup/hugepages.sh@89 -- # local node 
00:04:32.058 05:23:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.058 05:23:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.058 05:23:35 -- setup/hugepages.sh@92 -- # local surp 00:04:32.058 05:23:35 -- setup/hugepages.sh@93 -- # local resv 00:04:32.058 05:23:35 -- setup/hugepages.sh@94 -- # local anon 00:04:32.058 05:23:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.058 05:23:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.058 05:23:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.058 05:23:35 -- setup/common.sh@18 -- # local node= 00:04:32.058 05:23:35 -- setup/common.sh@19 -- # local var val 00:04:32.058 05:23:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.058 05:23:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.058 05:23:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.058 05:23:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.058 05:23:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.058 05:23:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064340 kB' 'MemAvailable: 9492264 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3702348 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144488 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163336 kB' 'Mapped: 68044 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 257780 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63600 kB' 'KernelStack: 4360 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 
05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.058 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.058 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.059 05:23:35 -- setup/common.sh@33 -- # echo 0 00:04:32.059 05:23:35 -- setup/common.sh@33 -- # return 0 00:04:32.059 05:23:35 -- setup/hugepages.sh@97 -- # anon=0 00:04:32.059 05:23:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.059 05:23:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.059 05:23:35 -- setup/common.sh@18 -- # local node= 00:04:32.059 05:23:35 -- setup/common.sh@19 -- # local var val 00:04:32.059 05:23:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.059 05:23:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.059 05:23:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.059 05:23:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.059 05:23:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.059 05:23:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064340 kB' 'MemAvailable: 9492264 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3702608 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144748 kB' 
'Active(file): 997400 kB' 'Inactive(file): 3557860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163336 kB' 'Mapped: 68044 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 257780 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63600 kB' 'KernelStack: 4428 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 
05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.059 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.059 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.060 05:23:35 -- setup/common.sh@33 -- # echo 0 00:04:32.060 05:23:35 -- setup/common.sh@33 -- # return 0 00:04:32.060 05:23:35 -- setup/hugepages.sh@99 -- # surp=0 00:04:32.060 05:23:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.060 05:23:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.060 05:23:35 -- setup/common.sh@18 -- # local node= 00:04:32.060 05:23:35 -- setup/common.sh@19 -- # local var val 00:04:32.060 05:23:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.060 
05:23:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.060 05:23:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.060 05:23:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.060 05:23:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.060 05:23:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064088 kB' 'MemAvailable: 9492012 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998444 kB' 'Inactive: 3702284 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144424 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163020 kB' 'Mapped: 68036 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 257804 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63624 kB' 'KernelStack: 4400 kB' 'PageTables: 3876 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.060 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.060 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.061 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.061 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:32.061 05:23:35 -- setup/common.sh@33 -- # echo 0 00:04:32.061 05:23:35 -- setup/common.sh@33 -- # return 0 00:04:32.061 05:23:35 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.061 nr_hugepages=1024 00:04:32.061 05:23:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.061 resv_hugepages=0 00:04:32.061 05:23:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.061 surplus_hugepages=0 00:04:32.061 05:23:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.061 anon_hugepages=0 00:04:32.061 05:23:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.061 05:23:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.061 05:23:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.061 05:23:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.061 05:23:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.061 05:23:35 -- setup/common.sh@18 -- # local node= 00:04:32.061 05:23:35 -- setup/common.sh@19 -- # local var val 00:04:32.061 05:23:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.061 05:23:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.061 05:23:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.061 05:23:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.062 05:23:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.062 05:23:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064088 kB' 'MemAvailable: 9492012 kB' 'Buffers: 35132 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998444 kB' 'Inactive: 3702220 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144360 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557860 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'AnonPages: 163216 kB' 'Mapped: 68036 kB' 'Shmem: 2596 kB' 'KReclaimable: 194180 kB' 'Slab: 257804 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63624 kB' 'KernelStack: 4384 kB' 'PageTables: 3836 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 
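The xtrace above is get_meminfo walking /proc/meminfo (or a per-node meminfo file) one field at a time until it reaches the requested key, then echoing that key's value. A minimal sketch of the same pattern, using a hypothetical helper name (get_meminfo_sketch) and simplified prefix handling rather than the exact setup/common.sh code:

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Prefer the per-node stats when a node index is given and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}          # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                     # value only, e.g. 1024 or a size in kB
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 in this run
    #      get_meminfo_sketch HugePages_Surp 0   -> surplus pages on node 0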
00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.062 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.062 05:23:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.063 05:23:35 -- setup/common.sh@33 -- # echo 1024 00:04:32.063 05:23:35 -- setup/common.sh@33 -- # return 0 00:04:32.063 05:23:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.063 05:23:35 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.063 05:23:35 -- setup/hugepages.sh@27 -- # local node 00:04:32.063 05:23:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.063 05:23:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.063 05:23:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.063 05:23:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.063 05:23:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.063 05:23:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.063 05:23:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.063 05:23:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.063 05:23:35 -- setup/common.sh@18 -- # local node=0 00:04:32.063 05:23:35 -- setup/common.sh@19 -- # local var val 00:04:32.063 05:23:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.063 05:23:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.063 05:23:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.063 05:23:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.063 05:23:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.063 05:23:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063332 kB' 'MemUsed: 7179640 kB' 'SwapCached: 0 kB' 'Active: 998444 kB' 'Inactive: 3702008 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 144144 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 672 kB' 'Writeback: 0 kB' 'FilePages: 4566924 kB' 'Mapped: 68024 kB' 'AnonPages: 162836 kB' 'Shmem: 2596 kB' 'KernelStack: 4436 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194180 kB' 'Slab: 257764 kB' 'SReclaimable: 194180 kB' 'SUnreclaim: 63584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.063 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.063 05:23:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # 
continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # continue 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.064 05:23:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.064 05:23:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.064 05:23:35 -- setup/common.sh@33 -- # echo 0 00:04:32.064 05:23:35 -- setup/common.sh@33 -- # return 0 00:04:32.064 05:23:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.064 05:23:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.064 05:23:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.064 node0=1024 expecting 1024 00:04:32.064 05:23:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.064 05:23:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.064 00:04:32.064 real 0m0.959s 00:04:32.064 user 0m0.316s 00:04:32.064 sys 0m0.683s 00:04:32.064 05:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.064 05:23:35 -- common/autotest_common.sh@10 -- # set +x 00:04:32.064 ************************************ 00:04:32.064 END TEST even_2G_alloc 00:04:32.064 ************************************ 00:04:32.064 05:23:35 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:32.064 05:23:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.064 05:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.064 05:23:35 -- common/autotest_common.sh@10 -- # set +x 
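even_2G_alloc closes by walking each node under /sys/devices/system/node and printing what it found against what it configured ("node0=1024 expecting 1024" above). The real loop also folds per-node surplus and reserved pages into its bookkeeping; below is a simplified sketch that only compares totals, reusing the hypothetical get_meminfo_sketch helper from the earlier note:

    verify_node_hugepages_sketch() {
        local expected=$1 path node count
        for path in /sys/devices/system/node/node[0-9]*; do
            [[ -d $path ]] || continue
            node=${path##*node}
            count=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node${node}=${count} expecting ${expected}"
            (( count == expected )) || return 1
        done
    }

    # Single-node VM in this run: verify_node_hugepages_sketch 1024 -> node0=1024 expecting 1024

The next test, odd_alloc, requests HUGEMEM=2049, i.e. 2098176 kB, which does not divide evenly into 2048 kB hugepages; the trace shows it being converted to nr_hugepages=1025 (and the later meminfo dumps report Hugetlb: 2099200 kB = 1025 * 2048 kB). A one-line restatement of that arithmetic, assuming a round-up rule rather than quoting the exact hugepages.sh expression:

    size_kb=2098176        # HUGEMEM=2049 MB requested by the odd_alloc test
    hugepage_kb=2048       # Hugepagesize reported in the meminfo dumps
    nr=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # round up to whole pages
    echo "$nr"             # 1025, matching nr_hugepages=1025 and Hugetlb: 2099200 kB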
00:04:32.064 ************************************ 00:04:32.064 START TEST odd_alloc 00:04:32.064 ************************************ 00:04:32.064 05:23:35 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:32.064 05:23:35 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:32.064 05:23:35 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:32.064 05:23:35 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:32.064 05:23:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.064 05:23:35 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.064 05:23:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.064 05:23:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:32.064 05:23:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.064 05:23:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.064 05:23:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.064 05:23:35 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:32.064 05:23:35 -- setup/hugepages.sh@83 -- # : 0 00:04:32.064 05:23:35 -- setup/hugepages.sh@84 -- # : 0 00:04:32.064 05:23:35 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.064 05:23:35 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:32.064 05:23:35 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:32.064 05:23:35 -- setup/hugepages.sh@160 -- # setup output 00:04:32.064 05:23:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.064 05:23:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:32.323 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.262 05:23:36 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:33.262 05:23:36 -- setup/hugepages.sh@89 -- # local node 00:04:33.262 05:23:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.262 05:23:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.262 05:23:36 -- setup/hugepages.sh@92 -- # local surp 00:04:33.262 05:23:36 -- setup/hugepages.sh@93 -- # local resv 00:04:33.262 05:23:36 -- setup/hugepages.sh@94 -- # local anon 00:04:33.262 05:23:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.262 05:23:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.262 05:23:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.262 05:23:36 -- setup/common.sh@18 -- # local node= 00:04:33.262 05:23:36 -- setup/common.sh@19 -- # local var val 00:04:33.262 05:23:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.263 05:23:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.263 05:23:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.263 05:23:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.263 05:23:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.263 05:23:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064004 kB' 'MemAvailable: 9491916 kB' 'Buffers: 35140 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3698108 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140244 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158872 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 258044 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63880 kB' 'KernelStack: 4368 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 
-- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 
05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.263 05:23:36 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.263 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.263 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.264 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.264 05:23:36 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.264 05:23:36 -- setup/common.sh@33 -- # echo 0 00:04:33.264 05:23:36 -- setup/common.sh@33 -- # return 0 00:04:33.264 05:23:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:33.264 05:23:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.264 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.264 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:33.264 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:33.264 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.264 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.264 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.264 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.264 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.264 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064004 kB' 'MemAvailable: 9491916 kB' 'Buffers: 35140 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3697848 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139984 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158872 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 258044 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63880 kB' 'KernelStack: 4368 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 
-- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.264 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.264 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 
05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.265 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:33.265 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:33.265 05:23:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:33.265 05:23:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.265 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.265 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:33.265 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:33.265 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.265 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.265 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.265 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.265 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.265 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064004 kB' 'MemAvailable: 9491916 kB' 'Buffers: 35140 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998452 kB' 'Inactive: 3697620 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139756 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158384 kB' 'Mapped: 67212 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 258012 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63848 kB' 'KernelStack: 4336 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 
'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
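(The long run of records above is the key lookup that setup/common.sh repeats for every /proc/meminfo field: mapfile snapshots the file, then each row is split with IFS=': ' and compared against the requested key, here HugePages_Rsvd. A minimal, hedged paraphrase of that traced logic is sketched below; the real helper in setup/common.sh may differ in details.)

  # Hedged paraphrase of the lookup loop seen in the trace, not the verbatim script.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # with a node index, the per-node file is consulted instead of the global one
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local line var val _
      while IFS= read -r line; do
          line=${line#Node [0-9]* }          # per-node files prefix every row with "Node N"
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long run of [[ ... ]] / continue records above
          echo "$val"                        # the "echo N / return 0" pair seen at common.sh@33
          return 0
      done < "$mem_f"
      return 1
  }
  # e.g. surp=$(get_meminfo_sketch HugePages_Surp); resv=$(get_meminfo_sketch HugePages_Rsvd)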
00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.265 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.265 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.266 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:33.266 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:33.266 05:23:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:33.266 nr_hugepages=1025 00:04:33.266 05:23:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:33.266 resv_hugepages=0 00:04:33.266 05:23:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.266 surplus_hugepages=0 00:04:33.266 05:23:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.266 anon_hugepages=0 00:04:33.266 05:23:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.266 05:23:37 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:33.266 05:23:37 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:33.266 05:23:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.266 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.266 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:33.266 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:33.266 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.266 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.266 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.266 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.266 
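(The values echoed just above - nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 - feed the (( 1025 == nr_hugepages + surp + resv )) assertion at hugepages.sh@107. An equivalent stand-alone check, hedged as a sketch rather than the script's own code:)

  expected=1025
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  (( total == expected + surp + resv )) && echo "odd_alloc: $expected pages accounted for"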
05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.266 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5063752 kB' 'MemAvailable: 9491664 kB' 'Buffers: 35140 kB' 'Cached: 4531792 kB' 'SwapCached: 0 kB' 'Active: 998444 kB' 'Inactive: 3697800 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139936 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158560 kB' 'Mapped: 67472 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 258028 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63864 kB' 'KernelStack: 4436 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 500680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.266 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.266 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 
-- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 
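(A quick cross-check of the meminfo snapshot a few records up: the Hugetlb figure is just HugePages_Total times Hugepagesize, 1025 x 2048 kB = 2099200 kB, which matches the 'Hugetlb: 2099200 kB' field printed there. The same arithmetic as a one-liner:)

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} END {print "Hugetlb should be", t*s, "kB"}' /proc/meminfo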
00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.267 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.267 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.268 05:23:37 -- setup/common.sh@33 -- # echo 1025 00:04:33.268 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:33.268 05:23:37 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:33.268 05:23:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.268 05:23:37 -- setup/hugepages.sh@27 -- # local node 00:04:33.268 05:23:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.268 05:23:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:33.268 05:23:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.268 05:23:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.268 05:23:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.268 
05:23:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.268 05:23:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.268 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.268 05:23:37 -- setup/common.sh@18 -- # local node=0 00:04:33.268 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:33.268 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.268 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.268 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.268 05:23:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.268 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.268 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064016 kB' 'MemUsed: 7178956 kB' 'SwapCached: 0 kB' 'Active: 998444 kB' 'Inactive: 3697592 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139728 kB' 'Active(file): 997400 kB' 'Inactive(file): 3557864 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 4566932 kB' 'Mapped: 67472 kB' 'AnonPages: 158392 kB' 'Shmem: 2596 kB' 'KernelStack: 4436 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194164 kB' 'Slab: 258028 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
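(The get_nodes pass above globs /sys/devices/system/node/node+([0-9]); on this single-node VM it ends with no_nodes=1 and all 1025 odd_alloc pages expected on node0, and the per-node read that follows switches to node0's meminfo file. A stand-alone equivalent of that per-node accounting, hedged as a sketch with this run's expected value:)

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      echo "node$node=$total expecting 1025"
  done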
00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.268 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.268 05:23:37 -- setup/common.sh@32 -- # continue 
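(Each "echo 0 / return 0" pair at common.sh@33 in the trace is the match branch of the lookup; the caller in setup/hugepages.sh captures it through command substitution. Reusing the get_meminfo_sketch function shown earlier, the hugepages.sh lines referenced in the trace reduce to roughly the following hedged reconstruction:)

  surp=$(get_meminfo_sketch HugePages_Surp)                       # hugepages.sh@99  -> 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)                       # hugepages.sh@100 -> 0
  (( nodes_test[0] += $(get_meminfo_sketch HugePages_Surp 0) ))   # per-node pass, @116-@117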
00:04:33.268 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.269 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.269 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.269 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.269 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.269 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.269 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.269 05:23:37 -- setup/common.sh@32 -- # continue 00:04:33.269 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.269 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.269 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.269 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:33.269 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:33.269 05:23:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.269 05:23:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.269 05:23:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.269 node0=1025 expecting 1025 00:04:33.269 05:23:37 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:33.269 05:23:37 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:33.269 00:04:33.269 real 0m1.187s 00:04:33.269 user 0m0.320s 00:04:33.269 sys 0m0.906s 00:04:33.269 05:23:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.269 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:04:33.269 ************************************ 00:04:33.269 END TEST odd_alloc 00:04:33.269 ************************************ 00:04:33.269 05:23:37 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:33.269 05:23:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.269 05:23:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.269 05:23:37 -- common/autotest_common.sh@10 -- # set +x 00:04:33.269 ************************************ 00:04:33.269 START TEST custom_alloc 00:04:33.269 ************************************ 00:04:33.269 05:23:37 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:33.269 05:23:37 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:33.269 05:23:37 -- setup/hugepages.sh@169 -- # local node 00:04:33.269 05:23:37 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:33.269 05:23:37 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:33.269 05:23:37 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:33.269 05:23:37 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:33.269 05:23:37 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:33.269 05:23:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.269 05:23:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.269 05:23:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.269 05:23:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.269 05:23:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.269 05:23:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 
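(custom_alloc, which starts above after odd_alloc passes, asks get_test_nr_hugepages for a 1048576 kB pool; with the default 2048 kB hugepage size that reduces to the 512 pages recorded in nodes_hp[0] and exported as HUGENODE. The arithmetic as a stand-alone sketch using this run's numbers, not the script itself:)

  size_kb=1048576
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepage_kb ))                        # 512
  echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"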
00:04:33.269 05:23:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@83 -- # : 0 00:04:33.269 05:23:37 -- setup/hugepages.sh@84 -- # : 0 00:04:33.269 05:23:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:33.269 05:23:37 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:33.269 05:23:37 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:33.269 05:23:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.269 05:23:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.269 05:23:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.269 05:23:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.269 05:23:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.269 05:23:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:33.269 05:23:37 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:33.269 05:23:37 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:33.269 05:23:37 -- setup/hugepages.sh@78 -- # return 0 00:04:33.269 05:23:37 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:33.269 05:23:37 -- setup/hugepages.sh@187 -- # setup output 00:04:33.269 05:23:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.269 05:23:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:33.534 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.124 05:23:37 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:34.124 05:23:37 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:34.124 05:23:37 -- setup/hugepages.sh@89 -- # local node 00:04:34.124 05:23:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.124 05:23:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.124 05:23:37 -- setup/hugepages.sh@92 -- # local surp 00:04:34.124 05:23:37 -- setup/hugepages.sh@93 -- # local resv 00:04:34.124 05:23:37 -- setup/hugepages.sh@94 -- # local anon 00:04:34.124 05:23:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.124 05:23:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.124 05:23:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.124 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:34.124 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:34.124 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.124 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.124 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.124 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.124 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.124 05:23:37 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6114712 kB' 'MemAvailable: 10542640 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697380 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139536 kB' 'Active(file): 997436 kB' 'Inactive(file): 3557844 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158412 kB' 'Mapped: 67208 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257812 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63648 kB' 'KernelStack: 4288 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 
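(The AnonHugePages lookup running here is gated on transparent hugepages not being fully disabled - the "always [madvise] never" test a few records back - and the snapshot reports AnonHugePages: 0 kB, so anon ends up 0. A hedged stand-alone equivalent of that gate:)

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
  if [[ $thp != *'[never]'* ]]; then
      awk '/^AnonHugePages:/ {print "anon_hugepages=" $2}' /proc/meminfo   # 0 here
  fi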
05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.124 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.124 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.125 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:34.125 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:34.125 05:23:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:34.125 05:23:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.125 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.125 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:34.125 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:34.125 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.125 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.125 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.125 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.125 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.125 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6114712 kB' 'MemAvailable: 10542640 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697380 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139536 kB' 'Active(file): 997436 kB' 'Inactive(file): 3557844 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158152 kB' 'Mapped: 67208 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257812 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63648 kB' 'KernelStack: 4288 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.125 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.125 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # 
continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.126 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:34.126 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:34.126 05:23:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:34.126 05:23:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.126 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.126 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:34.126 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:34.126 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.126 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.126 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.126 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.126 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.126 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6114712 kB' 'MemAvailable: 10542640 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697380 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139536 kB' 'Active(file): 997436 kB' 'Inactive(file): 3557844 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158152 kB' 'Mapped: 67208 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257812 kB' 'SReclaimable: 194164 kB' 
'SUnreclaim: 63648 kB' 'KernelStack: 4288 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- 
setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.126 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.126 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 
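The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" above and below are setup/common.sh's get_meminfo walking every line of /proc/meminfo (or the per-node sysfs meminfo) until it hits the requested field. A minimal sketch of that loop, reconstructed from this trace rather than copied from the SPDK source (how mapfile is actually fed and the exact match syntax are assumptions), looks roughly like:

    shopt -s extglob   # needed for the "Node N " prefix strip seen at common.sh@29

    get_meminfo() {
        # get_meminfo <field> [<numa node>]  ->  prints the field's value (kB or page count)
        local get=$1 node=${2:-} var val _ mem_f mem line
        mem_f=/proc/meminfo
        # with a node id, the per-node stats come from sysfs instead (common.sh@23/@24)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix; strip it
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # each mismatch is one "continue" in the trace
            echo "$val"                        # e.g. 0 for HugePages_Surp, 512 for HugePages_Total
            return 0
        done
    }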
00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.127 05:23:37 -- setup/common.sh@33 -- # echo 0 00:04:34.127 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:34.127 05:23:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.127 nr_hugepages=512 00:04:34.127 05:23:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:34.127 resv_hugepages=0 00:04:34.127 05:23:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.127 surplus_hugepages=0 00:04:34.127 05:23:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.127 anon_hugepages=0 00:04:34.127 05:23:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.127 05:23:37 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.127 05:23:37 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:34.127 05:23:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.127 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.127 05:23:37 -- setup/common.sh@18 -- # local node= 00:04:34.127 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:34.127 05:23:37 -- setup/common.sh@20 -- # 
local mem_f mem 00:04:34.127 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.127 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.127 05:23:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.127 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.127 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.127 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.127 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6114712 kB' 'MemAvailable: 10542640 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697320 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139476 kB' 'Active(file): 997436 kB' 'Inactive(file): 3557844 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158352 kB' 'Mapped: 67208 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257812 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63648 kB' 'KernelStack: 4324 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # 
continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.128 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.128 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.129 05:23:37 -- setup/common.sh@33 -- # echo 512 00:04:34.129 05:23:37 -- setup/common.sh@33 -- # return 0 00:04:34.129 05:23:37 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.129 05:23:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.129 05:23:37 -- setup/hugepages.sh@27 -- # local node 00:04:34.129 05:23:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.129 05:23:37 -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=512 00:04:34.129 05:23:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.129 05:23:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.129 05:23:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.129 05:23:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.129 05:23:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.129 05:23:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.129 05:23:37 -- setup/common.sh@18 -- # local node=0 00:04:34.129 05:23:37 -- setup/common.sh@19 -- # local var val 00:04:34.129 05:23:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.129 05:23:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.129 05:23:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.129 05:23:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.129 05:23:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.129 05:23:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6114712 kB' 'MemUsed: 6128260 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697840 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139996 kB' 'Active(file): 997436 kB' 'Inactive(file): 3557844 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 4566940 kB' 'Mapped: 67208 kB' 'AnonPages: 158612 kB' 'Shmem: 2596 kB' 'KernelStack: 4392 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194164 kB' 'Slab: 257812 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:37 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 
-- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.129 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.129 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # continue 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.130 05:23:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.130 05:23:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.130 05:23:38 -- setup/common.sh@33 -- # echo 0 00:04:34.130 05:23:38 -- setup/common.sh@33 -- # return 0 00:04:34.130 05:23:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.130 05:23:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.130 05:23:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.130 05:23:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.130 node0=512 expecting 512 00:04:34.130 05:23:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.130 05:23:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.130 00:04:34.130 real 0m0.861s 00:04:34.130 user 0m0.327s 00:04:34.130 sys 0m0.574s 00:04:34.130 05:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.130 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:04:34.130 ************************************ 00:04:34.130 END TEST custom_alloc 00:04:34.130 ************************************ 00:04:34.130 05:23:38 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:34.130 05:23:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.130 05:23:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.130 05:23:38 -- common/autotest_common.sh@10 -- # set +x 00:04:34.130 ************************************ 00:04:34.130 START TEST no_shrink_alloc 00:04:34.130 ************************************ 00:04:34.130 05:23:38 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:34.130 05:23:38 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:34.130 05:23:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.130 05:23:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.130 05:23:38 -- setup/hugepages.sh@51 -- # shift 00:04:34.130 05:23:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.130 05:23:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.130 05:23:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.130 05:23:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.130 05:23:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.130 05:23:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.130 05:23:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.130 05:23:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.130 05:23:38 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.130 05:23:38 -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.130 05:23:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.130 05:23:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.130 05:23:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.130 05:23:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:34.130 05:23:38 -- setup/hugepages.sh@73 -- # return 0 00:04:34.130 05:23:38 -- setup/hugepages.sh@198 -- # setup output 00:04:34.130 05:23:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.130 05:23:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:34.696 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.296 05:23:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:35.296 05:23:39 -- setup/hugepages.sh@89 -- # local node 00:04:35.296 05:23:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.296 05:23:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.297 05:23:39 -- setup/hugepages.sh@92 -- # local surp 00:04:35.297 05:23:39 -- setup/hugepages.sh@93 -- # local resv 00:04:35.297 05:23:39 -- setup/hugepages.sh@94 -- # local anon 00:04:35.297 05:23:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.297 05:23:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.297 05:23:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.297 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.297 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.297 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.297 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.297 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.297 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.297 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.297 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064260 kB' 'MemAvailable: 9492188 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998492 kB' 'Inactive: 3697812 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139972 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158884 kB' 'Mapped: 67484 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257932 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63768 kB' 'KernelStack: 4352 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- 
setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 
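
The stretch of trace running through here is setup/common.sh's get_meminfo helper scanning /proc/meminfo for AnonHugePages: it sets IFS=': ', reads each line into var and val, and continues past every key until the requested one turns up (this pass ends in echo 0, so anon=0). Below is a minimal, self-contained sketch of that lookup pattern; the function name is invented for illustration, and the per-node variant the trace uses later (reading /sys/devices/system/node/nodeN/meminfo) is left out.

    # Sketch only, not the actual setup/common.sh code: reproduces the
    # IFS=': ' / read / match-or-continue scan seen in the trace.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < /proc/meminfo
        echo 0                                 # key absent: report 0, as the trace does
    }
    # e.g. get_meminfo_sketch AnonHugePages   -> 0 on this run
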
00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.297 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.297 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.298 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.298 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.298 05:23:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.298 05:23:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.298 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.298 05:23:39 -- setup/common.sh@18 -- # 
local node= 00:04:35.298 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.298 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.298 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.298 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.298 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.298 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.298 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064260 kB' 'MemAvailable: 9492188 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998496 kB' 'Inactive: 3698092 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140252 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158908 kB' 'Mapped: 67484 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257932 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63768 kB' 'KernelStack: 4352 kB' 'PageTables: 3552 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 
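
A note on reading these lines: the long backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption in the log, they are how bash xtrace re-prints the quoted right-hand side of a [[ ... ]] comparison, escaping every character to show that it is matched literally rather than as a glob. Stripped of the tracing, each of these steps is just a literal string test, roughly as below (variable names are illustrative only):

    get=HugePages_Surp
    var=MemTotal
    [[ $var == "$get" ]] || echo "no match, keep scanning"
    # under set -x this prints something like:
    #   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
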
00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.298 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.298 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- 
setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.299 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.299 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.299 05:23:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:35.299 05:23:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.299 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.299 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.299 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.299 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.299 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.299 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.299 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.299 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.299 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064524 kB' 'MemAvailable: 9492452 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998500 kB' 'Inactive: 3697804 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 139964 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158544 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257932 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63768 kB' 'KernelStack: 4356 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 
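
The scans on either side of this point collect HugePages_Surp (finished just above, surp=0) and HugePages_Rsvd the same way, and both come back 0 in this run. A little further down the trace folds them into the check (( 1024 == nr_hugepages + surp + resv )) against HugePages_Total: 1024. That arithmetic can be reproduced outside the test harness with the self-contained snippet below; it mirrors the values from this run, and the awk one-liners are an assumption standing in for the script's own get_meminfo calls.

    # Hedged sketch of the accounting check performed a few lines further on,
    # with nr_hugepages=1024 taken from the trace.
    nr_hugepages=1024
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 here
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 here
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    fi

On this node the numbers line up (1024 == 1024 + 0 + 0), which is why the trace goes on to report nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and, at the end of this block, node0=1024 expecting 1024.
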
00:04:35.299 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.299 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.299 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 
-- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.300 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.300 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.300 nr_hugepages=1024 00:04:35.300 resv_hugepages=0 00:04:35.300 surplus_hugepages=0 00:04:35.300 anon_hugepages=0 00:04:35.300 05:23:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:35.300 05:23:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.300 05:23:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.300 05:23:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.300 05:23:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.300 05:23:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.300 05:23:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.300 05:23:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.300 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.300 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.300 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.300 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.300 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.300 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.300 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.300 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.300 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064788 kB' 'MemAvailable: 9492716 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998500 kB' 'Inactive: 3697624 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 139784 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158328 kB' 'Mapped: 67224 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257932 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63768 kB' 'KernelStack: 4324 kB' 'PageTables: 3560 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.300 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.300 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 
-- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.301 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.301 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.302 05:23:39 -- setup/common.sh@33 -- # echo 1024 00:04:35.302 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.302 05:23:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.302 05:23:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.302 05:23:39 -- setup/hugepages.sh@27 -- # local node 00:04:35.302 05:23:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.302 05:23:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.302 05:23:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.302 05:23:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.302 05:23:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.302 05:23:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.302 05:23:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.302 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.302 05:23:39 -- setup/common.sh@18 -- # local node=0 00:04:35.302 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.302 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.302 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.302 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.302 05:23:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.302 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.302 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5065048 kB' 'MemUsed: 7177924 kB' 'SwapCached: 0 kB' 'Active: 998492 kB' 'Inactive: 3697456 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139616 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 284 kB' 
'Writeback: 0 kB' 'FilePages: 4566940 kB' 'Mapped: 67224 kB' 'AnonPages: 158148 kB' 'Shmem: 2596 kB' 'KernelStack: 4408 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194164 kB' 'Slab: 257948 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.302 05:23:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.302 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.302 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.303 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.303 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.303 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.303 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.303 node0=1024 expecting 1024 00:04:35.303 05:23:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.303 05:23:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.303 05:23:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.303 05:23:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.303 05:23:39 -- 
setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.303 05:23:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.303 05:23:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:35.303 05:23:39 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:35.303 05:23:39 -- setup/hugepages.sh@202 -- # setup output 00:04:35.303 05:23:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.303 05:23:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:35.822 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.822 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:35.822 05:23:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:35.822 05:23:39 -- setup/hugepages.sh@89 -- # local node 00:04:35.822 05:23:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.822 05:23:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.822 05:23:39 -- setup/hugepages.sh@92 -- # local surp 00:04:35.822 05:23:39 -- setup/hugepages.sh@93 -- # local resv 00:04:35.822 05:23:39 -- setup/hugepages.sh@94 -- # local anon 00:04:35.822 05:23:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.822 05:23:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.822 05:23:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.822 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.822 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.822 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.822 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.822 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.822 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.822 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.822 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.822 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064096 kB' 'MemAvailable: 9492024 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998496 kB' 'Inactive: 3698212 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140372 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 159488 kB' 'Mapped: 67208 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257980 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63816 kB' 'KernelStack: 4520 kB' 'PageTables: 3876 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.822 05:23:39 -- 
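The trace above and below is setup/common.sh's meminfo lookup at work: it slurps /proc/meminfo (or a per-node meminfo file), strips any "Node N " prefix, then walks the fields with IFS=': ' until the requested key matches and echoes its value. A minimal stand-alone sketch of that pattern, using an assumed helper name (my_get_meminfo) rather than the project's real function:

my_get_meminfo() {
    # Scan /proc/meminfo line by line; IFS=': ' splits "Key:   value kB"
    # into var=Key and val=value, with any trailing unit falling into _.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

my_get_meminfo HugePages_Total    # prints e.g. 1024 on this runner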
setup/common.sh@31 -- # IFS=': ' 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.822 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.822 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 
05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.823 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.823 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.823 05:23:39 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.823 05:23:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.823 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.823 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.823 05:23:39 -- 
setup/common.sh@19 -- # local var val 00:04:35.823 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.823 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.823 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.823 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.823 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.823 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064060 kB' 'MemAvailable: 9491988 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998496 kB' 'Inactive: 3697896 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140056 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158884 kB' 'Mapped: 67252 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257972 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63808 kB' 'KernelStack: 4260 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.823 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.823 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # 
continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.824 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.824 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 
05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.825 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.825 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.825 05:23:39 -- setup/hugepages.sh@99 -- # surp=0 00:04:35.825 05:23:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.825 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.825 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.825 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.825 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.825 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.825 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.825 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.825 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.825 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064060 kB' 'MemAvailable: 9491988 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697672 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139832 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158792 kB' 'Mapped: 67292 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257964 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63800 kB' 'KernelStack: 4296 kB' 'PageTables: 3296 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- 
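Once the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups in this trace have each returned 0, the script checks that the configured hugepage count is consistent with what the kernel reports. A hedged sketch of that accounting, reusing the my_get_meminfo helper sketched earlier (variable names are illustrative, not the real setup/hugepages.sh):

nr_hugepages=1024                                 # count this test expects
anon=$(my_get_meminfo AnonHugePages)              # reported separately (anon_hugepages=0 in the log)
surp=$(my_get_meminfo HugePages_Surp)
resv=$(my_get_meminfo HugePages_Rsvd)
total=$(my_get_meminfo HugePages_Total)
# Mirrors the "(( 1024 == nr_hugepages + surp + resv ))" check in the trace.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2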
setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 
05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.825 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.825 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.826 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:35.826 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.826 05:23:39 -- setup/hugepages.sh@100 -- # resv=0 00:04:35.826 05:23:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.826 nr_hugepages=1024 00:04:35.826 resv_hugepages=0 00:04:35.826 surplus_hugepages=0 00:04:35.826 anon_hugepages=0 00:04:35.826 05:23:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.826 05:23:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.826 05:23:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.826 05:23:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.826 05:23:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.826 05:23:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.826 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.826 05:23:39 -- setup/common.sh@18 -- # local node= 00:04:35.826 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.826 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.826 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.826 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.826 05:23:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.826 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.826 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064060 kB' 'MemAvailable: 9491988 kB' 'Buffers: 35140 kB' 'Cached: 4531800 kB' 'SwapCached: 0 kB' 'Active: 998488 kB' 'Inactive: 3697500 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139660 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 158356 kB' 'Mapped: 67292 kB' 'Shmem: 2596 kB' 'KReclaimable: 194164 kB' 'Slab: 257964 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63800 kB' 'KernelStack: 4300 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 498076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 151404 kB' 'DirectMap2M: 4042752 kB' 'DirectMap1G: 10485760 kB' 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- 
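After the system-wide totals agree, the remaining trace (and the earlier "node0=1024 expecting 1024" line) is the per-node pass: each NUMA node's meminfo is read and its HugePages_Total compared with the expected count. A rough equivalent under the same assumptions:

expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${pages} expecting ${expected}"
done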
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.826 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.826 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # continue 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.827 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.827 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.827 05:23:39 -- setup/common.sh@33 -- # echo 1024 00:04:35.827 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:35.827 05:23:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.827 05:23:39 -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.827 05:23:39 -- setup/hugepages.sh@27 -- # local node 00:04:35.827 05:23:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.827 05:23:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.827 05:23:39 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.827 05:23:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.827 05:23:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.827 05:23:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.827 05:23:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.827 05:23:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.827 05:23:39 -- setup/common.sh@18 -- # local node=0 00:04:35.827 05:23:39 -- setup/common.sh@19 -- # local var val 00:04:35.827 05:23:39 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.827 05:23:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.827 05:23:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.827 05:23:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.827 05:23:39 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.827 05:23:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 5064068 kB' 'MemUsed: 7178904 kB' 'SwapCached: 0 kB' 'Active: 998484 kB' 'Inactive: 3697492 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139652 kB' 'Active(file): 997440 kB' 'Inactive(file): 3557840 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 4566940 kB' 'Mapped: 67332 kB' 'AnonPages: 158356 kB' 'Shmem: 
2596 kB' 'KernelStack: 4316 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194164 kB' 'Slab: 257972 kB' 'SReclaimable: 194164 kB' 'SUnreclaim: 63808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 
05:23:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': 
' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.086 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.086 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # continue 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.087 05:23:39 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.087 05:23:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.087 05:23:39 -- setup/common.sh@33 -- # echo 0 00:04:36.087 05:23:39 -- setup/common.sh@33 -- # return 0 00:04:36.087 node0=1024 expecting 1024 00:04:36.087 ************************************ 00:04:36.087 END TEST no_shrink_alloc 00:04:36.087 ************************************ 00:04:36.087 05:23:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.087 05:23:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.087 05:23:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.087 
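The loop traced above is setup/common.sh's get_meminfo walking the node's meminfo file one "field: value" pair at a time and only echoing a value once the requested field (HugePages_Total, then HugePages_Surp) matches. A condensed sketch of that pattern, assuming the same file layout but not reproducing the SPDK helper verbatim:

get_meminfo() {   # usage: get_meminfo HugePages_Surp [node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#"Node $node "}   # per-node files prefix each field with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

In this run the HugePages_Total lookup prints 1024 and the node-0 HugePages_Surp lookup prints 0, matching the "echo 1024" and "echo 0" lines in the trace.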
05:23:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.087 05:23:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:36.087 05:23:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.087 00:04:36.087 real 0m1.764s 00:04:36.087 user 0m0.560s 00:04:36.087 sys 0m1.181s 00:04:36.087 05:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.087 05:23:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.087 05:23:39 -- setup/hugepages.sh@217 -- # clear_hp 00:04:36.087 05:23:39 -- setup/hugepages.sh@37 -- # local node hp 00:04:36.087 05:23:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.087 05:23:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.087 05:23:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:36.087 05:23:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.087 05:23:39 -- setup/hugepages.sh@41 -- # echo 0 00:04:36.087 05:23:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.087 05:23:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.087 ************************************ 00:04:36.087 END TEST hugepages 00:04:36.087 ************************************ 00:04:36.087 00:04:36.087 real 0m7.432s 00:04:36.087 user 0m2.437s 00:04:36.087 sys 0m5.111s 00:04:36.087 05:23:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.087 05:23:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.087 05:23:39 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:36.087 05:23:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.087 05:23:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.087 05:23:39 -- common/autotest_common.sh@10 -- # set +x 00:04:36.087 ************************************ 00:04:36.087 START TEST driver 00:04:36.087 ************************************ 00:04:36.087 05:23:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:36.087 * Looking for test storage... 
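The clear_hp teardown above writes 0 into each node's hugepages-*/nr_hugepages entry before the driver suite starts; the redirection target is not visible in xtrace output, so the sysfs path below is the assumed destination of those "echo 0" lines (needs root):

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # return the pool's pages to the kernel
    done
done
export CLEAR_HUGE=yes                 # exported in the trace for later setup.sh invocations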
00:04:36.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:36.087 05:23:40 -- setup/driver.sh@68 -- # setup reset 00:04:36.087 05:23:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.087 05:23:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.652 05:23:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.652 05:23:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.652 05:23:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.652 05:23:40 -- common/autotest_common.sh@10 -- # set +x 00:04:36.652 ************************************ 00:04:36.652 START TEST guess_driver 00:04:36.652 ************************************ 00:04:36.652 05:23:40 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:36.652 05:23:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.652 05:23:40 -- setup/driver.sh@47 -- # local fail=0 00:04:36.652 05:23:40 -- setup/driver.sh@49 -- # pick_driver 00:04:36.652 05:23:40 -- setup/driver.sh@36 -- # vfio 00:04:36.652 05:23:40 -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.652 05:23:40 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.652 05:23:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.652 05:23:40 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:36.652 05:23:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:36.652 05:23:40 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:36.652 05:23:40 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:36.652 05:23:40 -- setup/driver.sh@32 -- # return 1 00:04:36.652 05:23:40 -- setup/driver.sh@38 -- # uio 00:04:36.652 05:23:40 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:04:36.652 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:36.652 05:23:40 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.652 Looking for driver=uio_pci_generic 00:04:36.652 05:23:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:36.652 05:23:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.652 05:23:40 -- setup/driver.sh@45 -- # setup output config 00:04:36.652 05:23:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.652 05:23:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.910 05:23:40 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:36.910 05:23:40 -- setup/driver.sh@58 -- # continue 00:04:36.910 05:23:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.168 05:23:40 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:37.168 05:23:40 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:37.168 05:23:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.546 05:23:42 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:38.546 05:23:42 -- setup/driver.sh@65 -- # setup reset 
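guess_driver above settles on uio_pci_generic: vfio is rejected because /sys/kernel/iommu_groups is empty and unsafe no-IOMMU mode reads "N", while uio_pci_generic qualifies because modprobe --show-depends resolves it to real .ko files. The decision reduces to roughly this sketch (not the exact driver.sh):

pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*) unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci                # a usable IOMMU (or the unsafe override) allows vfio
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic         # no-IOMMU fallback chosen in this run
    else
        echo 'No valid driver found'
    fi
}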
00:04:38.546 05:23:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.546 05:23:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.114 ************************************ 00:04:39.114 END TEST guess_driver 00:04:39.114 ************************************ 00:04:39.114 00:04:39.114 real 0m2.374s 00:04:39.114 user 0m0.446s 00:04:39.114 sys 0m1.914s 00:04:39.114 05:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.114 05:23:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.114 ************************************ 00:04:39.114 END TEST driver 00:04:39.114 ************************************ 00:04:39.114 00:04:39.114 real 0m2.942s 00:04:39.114 user 0m0.705s 00:04:39.114 sys 0m2.239s 00:04:39.114 05:23:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.114 05:23:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.114 05:23:42 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:39.114 05:23:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.114 05:23:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.114 05:23:42 -- common/autotest_common.sh@10 -- # set +x 00:04:39.114 ************************************ 00:04:39.114 START TEST devices 00:04:39.114 ************************************ 00:04:39.114 05:23:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:39.114 * Looking for test storage... 00:04:39.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:39.114 05:23:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.114 05:23:43 -- setup/devices.sh@192 -- # setup reset 00:04:39.114 05:23:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.114 05:23:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.682 05:23:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:39.682 05:23:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.682 05:23:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.682 05:23:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.682 05:23:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.682 05:23:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.682 05:23:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.682 05:23:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.682 05:23:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.682 05:23:43 -- setup/devices.sh@196 -- # blocks=() 00:04:39.682 05:23:43 -- setup/devices.sh@196 -- # declare -a blocks 00:04:39.682 05:23:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:39.682 05:23:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:39.682 05:23:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:39.682 05:23:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.682 05:23:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:39.682 05:23:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.682 05:23:43 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:39.682 05:23:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:39.682 05:23:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:39.682 05:23:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:39.682 05:23:43 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:39.682 No valid GPT data, bailing 00:04:39.682 05:23:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.682 05:23:43 -- scripts/common.sh@393 -- # pt= 00:04:39.682 05:23:43 -- scripts/common.sh@394 -- # return 1 00:04:39.682 05:23:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:39.682 05:23:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:39.682 05:23:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:39.682 05:23:43 -- setup/common.sh@80 -- # echo 5368709120 00:04:39.682 05:23:43 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:39.682 05:23:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.682 05:23:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:39.682 05:23:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:39.682 05:23:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:39.682 05:23:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:39.682 05:23:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.682 05:23:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.682 05:23:43 -- common/autotest_common.sh@10 -- # set +x 00:04:39.682 ************************************ 00:04:39.682 START TEST nvme_mount 00:04:39.682 ************************************ 00:04:39.682 05:23:43 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:39.682 05:23:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:39.682 05:23:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:39.682 05:23:43 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.682 05:23:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.682 05:23:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:39.682 05:23:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.682 05:23:43 -- setup/common.sh@40 -- # local part_no=1 00:04:39.682 05:23:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:39.682 05:23:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.682 05:23:43 -- setup/common.sh@44 -- # parts=() 00:04:39.682 05:23:43 -- setup/common.sh@44 -- # local parts 00:04:39.682 05:23:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.682 05:23:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.682 05:23:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.682 05:23:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:39.682 05:23:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.682 05:23:43 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:39.682 05:23:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.682 05:23:43 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.057 Creating new GPT entries in memory. 00:04:41.057 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.057 other utilities. 00:04:41.057 05:23:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.057 05:23:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.057 05:23:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:41.057 05:23:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.057 05:23:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:41.647 Creating new GPT entries in memory. 00:04:41.647 The operation has completed successfully. 00:04:41.906 05:23:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:41.906 05:23:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.906 05:23:45 -- setup/common.sh@62 -- # wait 96626 00:04:41.906 05:23:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.906 05:23:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:41.906 05:23:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.906 05:23:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:41.906 05:23:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:41.906 05:23:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.906 05:23:45 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.906 05:23:45 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:41.906 05:23:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:41.906 05:23:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.906 05:23:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.906 05:23:45 -- setup/devices.sh@53 -- # local found=0 00:04:41.906 05:23:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.906 05:23:45 -- setup/devices.sh@56 -- # : 00:04:41.906 05:23:45 -- setup/devices.sh@59 -- # local pci status 00:04:41.906 05:23:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.906 05:23:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:41.906 05:23:45 -- setup/devices.sh@47 -- # setup output config 00:04:41.906 05:23:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.906 05:23:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.906 05:23:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.906 05:23:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:41.906 05:23:45 -- setup/devices.sh@63 -- # found=1 00:04:41.906 05:23:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.165 05:23:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:42.165 05:23:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.165 05:23:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:42.165 05:23:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.543 05:23:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.543 05:23:47 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:43.543 05:23:47 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.543 05:23:47 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.543 05:23:47 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:43.543 05:23:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.543 05:23:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.543 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.543 05:23:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.543 05:23:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.543 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.543 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.543 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.543 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.543 05:23:47 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:43.543 05:23:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:43.543 05:23:47 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:43.543 05:23:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:43.543 05:23:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.543 05:23:47 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:43.543 05:23:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:43.543 05:23:47 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.543 05:23:47 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.543 05:23:47 -- setup/devices.sh@53 -- # local found=0 00:04:43.543 05:23:47 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.543 05:23:47 -- setup/devices.sh@56 -- # : 00:04:43.543 05:23:47 -- setup/devices.sh@59 -- # local pci status 00:04:43.543 05:23:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.543 05:23:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:43.543 05:23:47 -- setup/devices.sh@47 -- # setup output config 00:04:43.543 05:23:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.543 05:23:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.543 05:23:47 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:43.543 05:23:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:43.543 05:23:47 -- setup/devices.sh@63 -- # found=1 00:04:43.543 05:23:47 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:43.543 05:23:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:43.543 05:23:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.543 05:23:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:43.543 05:23:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.921 05:23:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.921 05:23:48 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:44.921 05:23:48 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.921 05:23:48 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.921 05:23:48 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.921 05:23:48 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.921 05:23:48 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:44.921 05:23:48 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:44.921 05:23:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:44.921 05:23:48 -- setup/devices.sh@50 -- # local mount_point= 00:04:44.921 05:23:48 -- setup/devices.sh@51 -- # local test_file= 00:04:44.921 05:23:48 -- setup/devices.sh@53 -- # local found=0 00:04:44.921 05:23:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.921 05:23:48 -- setup/devices.sh@59 -- # local pci status 00:04:44.921 05:23:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.921 05:23:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:44.921 05:23:48 -- setup/devices.sh@47 -- # setup output config 00:04:44.921 05:23:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.921 05:23:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.921 05:23:48 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:44.921 05:23:48 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:44.921 05:23:48 -- setup/devices.sh@63 -- # found=1 00:04:44.921 05:23:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.921 05:23:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:44.921 05:23:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.921 05:23:48 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:44.921 05:23:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.299 05:23:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.299 05:23:49 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.299 05:23:49 -- setup/devices.sh@68 -- # return 0 00:04:46.299 05:23:49 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:46.299 05:23:49 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.299 05:23:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.299 05:23:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.299 05:23:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.299 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.299 00:04:46.299 real 0m6.333s 00:04:46.299 user 0m0.717s 00:04:46.299 sys 0m3.635s 00:04:46.299 05:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.299 
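The nvme_mount test above boils down to a partition, format, mount, verify, wipe cycle, run once against nvme0n1p1 and once against the bare namespace. Stripped of the harness, the first pass is roughly (device node, sgdisk arguments, mount point and file name as in the trace):

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all              # drop any existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:264191    # one small data partition, as in the trace
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt" && mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                # dummy file the verify step looks for

umount "$mnt"                         # cleanup_nvme
wipefs --all "${disk}p1"
wipefs --all "$disk"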
************************************ 00:04:46.299 END TEST nvme_mount 00:04:46.299 ************************************ 00:04:46.299 05:23:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.299 05:23:49 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:46.299 05:23:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.299 05:23:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.299 05:23:49 -- common/autotest_common.sh@10 -- # set +x 00:04:46.299 ************************************ 00:04:46.299 START TEST dm_mount 00:04:46.299 ************************************ 00:04:46.299 05:23:49 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:46.299 05:23:49 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:46.299 05:23:49 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:46.299 05:23:49 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:46.299 05:23:49 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:46.299 05:23:49 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.299 05:23:49 -- setup/common.sh@40 -- # local part_no=2 00:04:46.299 05:23:49 -- setup/common.sh@41 -- # local size=1073741824 00:04:46.299 05:23:49 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.299 05:23:49 -- setup/common.sh@44 -- # parts=() 00:04:46.299 05:23:49 -- setup/common.sh@44 -- # local parts 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.299 05:23:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.299 05:23:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:46.299 05:23:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.299 05:23:49 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:46.299 05:23:49 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.299 05:23:49 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:47.235 Creating new GPT entries in memory. 00:04:47.235 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.235 other utilities. 00:04:47.235 05:23:50 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.235 05:23:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.235 05:23:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.235 05:23:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.235 05:23:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:48.171 Creating new GPT entries in memory. 00:04:48.171 The operation has completed successfully. 00:04:48.171 05:23:51 -- setup/common.sh@57 -- # (( part++ )) 00:04:48.171 05:23:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.171 05:23:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:48.171 05:23:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.171 05:23:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:49.107 The operation has completed successfully. 
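dm_mount repeats the pattern, but first carves two partitions (sectors 2048-264191 and 264192-526335) and then layers a device-mapper node over them before formatting. The dmsetup table is fed on stdin and therefore not visible in the xtrace, so the table below is only an illustrative linear concatenation of the two partitions; device and mount paths are as traced:

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")          # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount

umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount   # teardown, as in cleanup_dm
dmsetup remove --force nvme_dm_test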
00:04:49.107 05:23:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:49.107 05:23:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.107 05:23:53 -- setup/common.sh@62 -- # wait 97107 00:04:49.107 05:23:53 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:49.107 05:23:53 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.107 05:23:53 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.107 05:23:53 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:49.366 05:23:53 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:49.366 05:23:53 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.366 05:23:53 -- setup/devices.sh@161 -- # break 00:04:49.366 05:23:53 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.366 05:23:53 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:49.366 05:23:53 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:49.366 05:23:53 -- setup/devices.sh@166 -- # dm=dm-0 00:04:49.366 05:23:53 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:49.366 05:23:53 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:49.366 05:23:53 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.366 05:23:53 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:49.366 05:23:53 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.366 05:23:53 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:49.366 05:23:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:49.366 05:23:53 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.366 05:23:53 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.366 05:23:53 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:49.366 05:23:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:49.366 05:23:53 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.366 05:23:53 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.366 05:23:53 -- setup/devices.sh@53 -- # local found=0 00:04:49.367 05:23:53 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.367 05:23:53 -- setup/devices.sh@56 -- # : 00:04:49.367 05:23:53 -- setup/devices.sh@59 -- # local pci status 00:04:49.367 05:23:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.367 05:23:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.367 05:23:53 -- setup/devices.sh@47 -- # setup output config 00:04:49.367 05:23:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.367 05:23:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.625 05:23:53 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.625 05:23:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:49.625 05:23:53 -- setup/devices.sh@63 -- # found=1 00:04:49.625 05:23:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.625 05:23:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.625 05:23:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.625 05:23:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.625 05:23:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.566 05:23:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.566 05:23:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:50.566 05:23:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.566 05:23:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.566 05:23:54 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:50.566 05:23:54 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.566 05:23:54 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:50.566 05:23:54 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:50.566 05:23:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:50.566 05:23:54 -- setup/devices.sh@50 -- # local mount_point= 00:04:50.566 05:23:54 -- setup/devices.sh@51 -- # local test_file= 00:04:50.566 05:23:54 -- setup/devices.sh@53 -- # local found=0 00:04:50.566 05:23:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.566 05:23:54 -- setup/devices.sh@59 -- # local pci status 00:04:50.566 05:23:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:50.566 05:23:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.566 05:23:54 -- setup/devices.sh@47 -- # setup output config 00:04:50.566 05:23:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.566 05:23:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.825 05:23:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:50.825 05:23:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.825 05:23:54 -- setup/devices.sh@63 -- # found=1 00:04:50.825 05:23:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.825 05:23:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:50.825 05:23:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.084 05:23:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:51.084 05:23:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.021 05:23:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.021 05:23:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.021 05:23:55 -- setup/devices.sh@68 -- # return 0 00:04:52.021 05:23:55 -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.021 05:23:55 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.021 05:23:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.021 05:23:55 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.021 05:23:55 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.021 05:23:55 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.021 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.021 05:23:55 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.021 05:23:55 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.021 00:04:52.021 real 0m5.959s 00:04:52.021 user 0m0.453s 00:04:52.021 sys 0m2.422s 00:04:52.021 05:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.021 ************************************ 00:04:52.021 05:23:55 -- common/autotest_common.sh@10 -- # set +x 00:04:52.021 END TEST dm_mount 00:04:52.021 ************************************ 00:04:52.021 05:23:55 -- setup/devices.sh@1 -- # cleanup 00:04:52.021 05:23:55 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.021 05:23:55 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.021 05:23:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.021 05:23:55 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.021 05:23:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.021 05:23:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.279 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:52.279 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:52.279 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.279 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.279 05:23:56 -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.279 05:23:56 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:52.279 05:23:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.279 05:23:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.279 05:23:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.279 05:23:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.279 05:23:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.279 00:04:52.279 real 0m13.084s 00:04:52.279 user 0m1.586s 00:04:52.279 sys 0m6.414s 00:04:52.279 05:23:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.279 ************************************ 00:04:52.279 END TEST devices 00:04:52.279 ************************************ 00:04:52.279 05:23:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 00:04:52.279 real 0m28.758s 00:04:52.279 user 0m6.482s 00:04:52.279 sys 0m17.413s 00:04:52.279 05:23:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.279 05:23:56 -- common/autotest_common.sh@10 -- # set +x 00:04:52.279 ************************************ 00:04:52.279 END TEST setup.sh 00:04:52.279 ************************************ 00:04:52.279 05:23:56 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:52.279 Hugepages 00:04:52.279 node hugesize free / total 00:04:52.279 node0 1048576kB 0 / 0 00:04:52.279 node0 2048kB 2048 / 2048 00:04:52.279 00:04:52.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.539 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.539 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:52.539 05:23:56 -- spdk/autotest.sh@141 -- # uname -s 00:04:52.539 05:23:56 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:52.539 05:23:56 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:04:52.539 05:23:56 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:53.106 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.039 05:23:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:54.974 05:23:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:54.974 05:23:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:54.974 05:23:58 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.974 05:23:58 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:54.974 05:23:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.974 05:23:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.974 05:23:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.974 05:23:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.974 05:23:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.234 05:23:58 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:55.234 05:23:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:55.234 05:23:58 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:55.493 Waiting for block devices as requested 00:04:55.493 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.493 05:23:59 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:55.493 05:23:59 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:55.493 05:23:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.493 05:23:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:55.751 05:23:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:55.751 05:23:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:55.751 05:23:59 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:55.751 05:23:59 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:55.751 05:23:59 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:55.751 05:23:59 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:55.751 05:23:59 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:55.751 05:23:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:55.751 05:23:59 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:55.752 05:23:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:55.752 05:23:59 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:55.752 05:23:59 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:55.752 05:23:59 -- common/autotest_common.sh@1542 -- # continue 00:04:55.752 05:23:59 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:55.752 05:23:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.752 05:23:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.752 05:23:59 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:55.752 05:23:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.752 05:23:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.752 05:23:59 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:56.269 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.659 05:24:01 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:57.659 05:24:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:57.659 05:24:01 -- common/autotest_common.sh@10 -- # set +x 00:04:57.659 05:24:01 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:57.659 05:24:01 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:57.659 05:24:01 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.659 05:24:01 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:57.659 05:24:01 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:57.659 05:24:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:57.659 05:24:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.659 05:24:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.659 05:24:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.659 05:24:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.660 05:24:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.660 05:24:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:57.660 05:24:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:57.660 05:24:01 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:57.660 05:24:01 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:57.660 05:24:01 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:57.660 05:24:01 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.660 05:24:01 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:57.660 05:24:01 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:57.660 05:24:01 -- common/autotest_common.sh@1578 -- # return 0 00:04:57.660 05:24:01 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:57.660 05:24:01 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.660 05:24:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.660 05:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.660 05:24:01 -- common/autotest_common.sh@10 -- # set +x 00:04:57.660 ************************************ 00:04:57.660 START TEST unittest 00:04:57.660 ************************************ 00:04:57.660 05:24:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.660 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.660 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.660 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:57.660 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.660 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:57.660 ++ rpc_py=rpc_cmd 00:04:57.660 ++ set -e 00:04:57.660 ++ shopt -s nullglob 00:04:57.660 ++ shopt -s extglob 00:04:57.660 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:57.660 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:57.660 +++ CONFIG_WPDK_DIR= 00:04:57.660 +++ CONFIG_ASAN=y 00:04:57.660 +++ CONFIG_VBDEV_COMPRESS=n 00:04:57.660 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:57.660 +++ CONFIG_USDT=n 00:04:57.660 +++ CONFIG_CUSTOMOCF=n 00:04:57.660 +++ CONFIG_PREFIX=/usr/local 00:04:57.660 +++ CONFIG_RBD=n 00:04:57.660 +++ CONFIG_LIBDIR= 00:04:57.660 +++ CONFIG_IDXD=y 00:04:57.660 +++ CONFIG_NVME_CUSE=y 00:04:57.660 +++ CONFIG_SMA=n 00:04:57.660 +++ CONFIG_VTUNE=n 00:04:57.660 +++ CONFIG_TSAN=n 00:04:57.660 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:57.660 +++ CONFIG_VFIO_USER_DIR= 00:04:57.660 +++ CONFIG_PGO_CAPTURE=n 00:04:57.660 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:57.660 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.660 +++ CONFIG_LTO=n 00:04:57.660 +++ CONFIG_ISCSI_INITIATOR=y 00:04:57.660 +++ CONFIG_CET=n 00:04:57.660 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:57.660 +++ CONFIG_OCF_PATH= 00:04:57.660 +++ CONFIG_RDMA_SET_TOS=y 00:04:57.660 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:57.660 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:57.660 +++ CONFIG_UBLK=n 00:04:57.660 +++ CONFIG_ISAL_CRYPTO=y 00:04:57.660 +++ CONFIG_OPENSSL_PATH= 00:04:57.660 +++ CONFIG_OCF=n 00:04:57.660 +++ CONFIG_FUSE=n 00:04:57.660 +++ CONFIG_VTUNE_DIR= 00:04:57.660 +++ CONFIG_FUZZER_LIB= 00:04:57.660 +++ CONFIG_FUZZER=n 00:04:57.660 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.660 +++ CONFIG_CRYPTO=n 00:04:57.660 +++ CONFIG_PGO_USE=n 00:04:57.660 +++ CONFIG_VHOST=y 00:04:57.660 +++ CONFIG_DAOS=n 00:04:57.660 +++ CONFIG_DPDK_INC_DIR= 00:04:57.660 +++ CONFIG_DAOS_DIR= 00:04:57.660 +++ CONFIG_UNIT_TESTS=y 00:04:57.660 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:57.660 +++ CONFIG_VIRTIO=y 00:04:57.660 +++ CONFIG_COVERAGE=y 00:04:57.660 +++ CONFIG_RDMA=y 00:04:57.660 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:57.660 +++ CONFIG_URING_PATH= 00:04:57.660 +++ CONFIG_XNVME=n 00:04:57.660 +++ CONFIG_VFIO_USER=n 00:04:57.660 +++ CONFIG_ARCH=native 00:04:57.660 +++ CONFIG_URING_ZNS=n 00:04:57.660 +++ CONFIG_WERROR=y 00:04:57.660 +++ CONFIG_HAVE_LIBBSD=n 00:04:57.660 +++ CONFIG_UBSAN=y 00:04:57.660 +++ CONFIG_IPSEC_MB_DIR= 00:04:57.660 +++ CONFIG_GOLANG=n 00:04:57.660 +++ CONFIG_ISAL=y 00:04:57.660 +++ CONFIG_IDXD_KERNEL=n 00:04:57.660 +++ CONFIG_DPDK_LIB_DIR= 00:04:57.660 +++ CONFIG_RDMA_PROV=verbs 00:04:57.660 +++ CONFIG_APPS=y 00:04:57.660 +++ CONFIG_SHARED=n 00:04:57.660 +++ CONFIG_FC_PATH= 00:04:57.660 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:57.660 +++ CONFIG_FC=n 00:04:57.660 +++ CONFIG_AVAHI=n 00:04:57.660 +++ CONFIG_FIO_PLUGIN=y 00:04:57.660 +++ CONFIG_RAID5F=y 00:04:57.660 +++ CONFIG_EXAMPLES=y 00:04:57.660 +++ CONFIG_TESTS=y 00:04:57.660 +++ CONFIG_CRYPTO_MLX5=n 00:04:57.660 +++ CONFIG_MAX_LCORES= 00:04:57.660 +++ CONFIG_IPSEC_MB=n 00:04:57.660 +++ CONFIG_DEBUG=y 00:04:57.660 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:57.660 +++ CONFIG_CROSS_PREFIX= 00:04:57.660 +++ CONFIG_URING=n 00:04:57.660 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.660 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.660 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:04:57.660 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:57.660 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:57.660 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.660 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:57.660 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.660 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:57.660 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:57.660 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:57.660 +++ VHOST_APP=("$_app_dir/vhost") 00:04:57.660 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:57.660 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:57.660 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:57.660 +++ [[ #ifndef SPDK_CONFIG_H 00:04:57.660 #define SPDK_CONFIG_H 00:04:57.660 #define SPDK_CONFIG_APPS 1 00:04:57.660 #define SPDK_CONFIG_ARCH native 00:04:57.660 #define SPDK_CONFIG_ASAN 1 00:04:57.660 #undef SPDK_CONFIG_AVAHI 00:04:57.660 #undef SPDK_CONFIG_CET 00:04:57.660 #define SPDK_CONFIG_COVERAGE 1 00:04:57.660 #define SPDK_CONFIG_CROSS_PREFIX 00:04:57.660 #undef SPDK_CONFIG_CRYPTO 00:04:57.660 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:57.660 #undef SPDK_CONFIG_CUSTOMOCF 00:04:57.660 #undef SPDK_CONFIG_DAOS 00:04:57.660 #define SPDK_CONFIG_DAOS_DIR 00:04:57.660 #define SPDK_CONFIG_DEBUG 1 00:04:57.660 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:57.660 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.660 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:57.660 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:57.660 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:57.660 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.660 #define SPDK_CONFIG_EXAMPLES 1 00:04:57.660 #undef SPDK_CONFIG_FC 00:04:57.660 #define SPDK_CONFIG_FC_PATH 00:04:57.660 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:57.660 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:57.660 #undef SPDK_CONFIG_FUSE 00:04:57.660 #undef SPDK_CONFIG_FUZZER 00:04:57.660 #define SPDK_CONFIG_FUZZER_LIB 00:04:57.660 #undef SPDK_CONFIG_GOLANG 00:04:57.660 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:57.660 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:57.660 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:57.660 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:57.660 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:57.660 #define SPDK_CONFIG_IDXD 1 00:04:57.660 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:57.660 #undef SPDK_CONFIG_IPSEC_MB 00:04:57.660 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:57.660 #define SPDK_CONFIG_ISAL 1 00:04:57.660 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:57.660 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:57.660 #define SPDK_CONFIG_LIBDIR 00:04:57.660 #undef SPDK_CONFIG_LTO 00:04:57.660 #define SPDK_CONFIG_MAX_LCORES 00:04:57.660 #define SPDK_CONFIG_NVME_CUSE 1 00:04:57.660 #undef SPDK_CONFIG_OCF 00:04:57.660 #define SPDK_CONFIG_OCF_PATH 00:04:57.660 #define SPDK_CONFIG_OPENSSL_PATH 00:04:57.660 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:57.660 #undef SPDK_CONFIG_PGO_USE 00:04:57.660 #define SPDK_CONFIG_PREFIX /usr/local 00:04:57.660 #define SPDK_CONFIG_RAID5F 1 00:04:57.660 #undef SPDK_CONFIG_RBD 00:04:57.660 #define SPDK_CONFIG_RDMA 1 00:04:57.660 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:57.660 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:57.660 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:57.660 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:57.660 #undef SPDK_CONFIG_SHARED 00:04:57.660 #undef SPDK_CONFIG_SMA 00:04:57.660 #define SPDK_CONFIG_TESTS 1 00:04:57.660 
#undef SPDK_CONFIG_TSAN 00:04:57.660 #undef SPDK_CONFIG_UBLK 00:04:57.660 #define SPDK_CONFIG_UBSAN 1 00:04:57.660 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:57.660 #undef SPDK_CONFIG_URING 00:04:57.660 #define SPDK_CONFIG_URING_PATH 00:04:57.660 #undef SPDK_CONFIG_URING_ZNS 00:04:57.660 #undef SPDK_CONFIG_USDT 00:04:57.660 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:57.660 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:57.660 #undef SPDK_CONFIG_VFIO_USER 00:04:57.660 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:57.660 #define SPDK_CONFIG_VHOST 1 00:04:57.660 #define SPDK_CONFIG_VIRTIO 1 00:04:57.660 #undef SPDK_CONFIG_VTUNE 00:04:57.660 #define SPDK_CONFIG_VTUNE_DIR 00:04:57.660 #define SPDK_CONFIG_WERROR 1 00:04:57.660 #define SPDK_CONFIG_WPDK_DIR 00:04:57.660 #undef SPDK_CONFIG_XNVME 00:04:57.660 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:57.660 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:57.660 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.660 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:57.660 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.660 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.660 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.661 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.661 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.661 ++++ export PATH 00:04:57.661 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.661 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.661 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.661 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.661 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.661 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:57.661 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:57.661 +++ TEST_TAG=N/A 00:04:57.661 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:57.661 ++ : 1 00:04:57.661 ++ export RUN_NIGHTLY 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_RUN_VALGRIND 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_TEST_UNITTEST 00:04:57.661 ++ : 00:04:57.661 ++ export SPDK_TEST_AUTOBUILD 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_RELEASE_BUILD 00:04:57.661 ++ : 0 
00:04:57.661 ++ export SPDK_TEST_ISAL 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_ISCSI 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_TEST_NVME 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVME_PMR 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVME_BP 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVME_CLI 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVME_CUSE 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVME_FDP 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVMF 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VFIOUSER 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_FUZZER 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_FUZZER_SHORT 00:04:57.661 ++ : rdma 00:04:57.661 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_RBD 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VHOST 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_TEST_BLOCKDEV 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_IOAT 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_BLOBFS 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VHOST_INIT 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_LVOL 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_RUN_ASAN 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_RUN_UBSAN 00:04:57.661 ++ : 00:04:57.661 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_RUN_NON_ROOT 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_CRYPTO 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_FTL 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_OCF 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_VMD 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_OPAL 00:04:57.661 ++ : 00:04:57.661 ++ export SPDK_TEST_NATIVE_DPDK 00:04:57.661 ++ : true 00:04:57.661 ++ export SPDK_AUTOTEST_X 00:04:57.661 ++ : 1 00:04:57.661 ++ export SPDK_TEST_RAID5 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_URING 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_USDT 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_USE_IGB_UIO 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_SCHEDULER 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_SCANBUILD 00:04:57.661 ++ : 00:04:57.661 ++ export SPDK_TEST_NVMF_NICS 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_SMA 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_DAOS 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_XNVME 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_ACCEL_DSA 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_ACCEL_IAA 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_ACCEL_IOAT 00:04:57.661 ++ : 00:04:57.661 ++ export SPDK_TEST_FUZZER_TARGET 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_TEST_NVMF_MDNS 00:04:57.661 ++ : 0 00:04:57.661 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:57.661 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.661 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.661 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.661 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.661 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.661 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.661 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.661 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.661 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.661 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.661 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.661 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.661 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:57.661 ++ PYTHONDONTWRITEBYTECODE=1 00:04:57.661 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.661 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.661 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.661 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.661 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:57.661 ++ rm -rf /var/tmp/asan_suppression_file 00:04:57.661 ++ cat 00:04:57.661 ++ echo leak:libfuse3.so 00:04:57.661 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.661 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.661 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.661 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.661 ++ '[' -z /var/spdk/dependencies ']' 00:04:57.661 ++ export DEPENDENCY_DIR 00:04:57.661 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.661 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.661 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.661 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.661 ++ export QEMU_BIN= 00:04:57.661 ++ QEMU_BIN= 00:04:57.661 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.661 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.661 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.661 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.661 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.661 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.661 ++ '[' 0 -eq 0 ']' 00:04:57.661 ++ export valgrind= 00:04:57.661 ++ valgrind= 00:04:57.661 +++ uname -s 00:04:57.661 ++ '[' Linux = Linux ']' 00:04:57.661 ++ HUGEMEM=4096 00:04:57.661 ++ export CLEAR_HUGE=yes 00:04:57.661 ++ CLEAR_HUGE=yes 00:04:57.661 ++ [[ 0 -eq 1 ]] 00:04:57.661 ++ [[ 0 -eq 1 ]] 00:04:57.661 ++ MAKE=make 00:04:57.661 +++ nproc 00:04:57.661 ++ MAKEFLAGS=-j10 00:04:57.661 ++ export HUGEMEM=4096 00:04:57.661 ++ HUGEMEM=4096 00:04:57.661 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:57.661 ++ NO_HUGE=() 00:04:57.661 ++ TEST_MODE= 00:04:57.661 ++ [[ -z '' ]] 00:04:57.661 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.661 ++ exec 00:04:57.661 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.661 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:57.661 ++ set_test_storage 2147483648 00:04:57.661 ++ [[ -v testdir ]] 00:04:57.661 ++ local requested_size=2147483648 00:04:57.661 ++ local mount target_dir 00:04:57.661 ++ local -A mounts fss sizes avails uses 00:04:57.661 ++ local source fs size avail mount use 00:04:57.661 ++ local storage_fallback storage_candidates 00:04:57.661 +++ mktemp -udt spdk.XXXXXX 00:04:57.661 ++ storage_fallback=/tmp/spdk.UYIy5S 00:04:57.661 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:57.661 ++ [[ -n '' ]] 00:04:57.661 ++ [[ -n '' ]] 00:04:57.661 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.UYIy5S/tests/unit /tmp/spdk.UYIy5S 00:04:57.661 ++ requested_size=2214592512 00:04:57.661 ++ read -r source fs size use avail _ mount 00:04:57.661 +++ df -T 00:04:57.661 +++ grep -v Filesystem 00:04:57.661 ++ mounts["$mount"]=tmpfs 00:04:57.661 ++ fss["$mount"]=tmpfs 00:04:57.661 ++ avails["$mount"]=1252601856 00:04:57.661 ++ sizes["$mount"]=1253683200 00:04:57.661 ++ uses["$mount"]=1081344 00:04:57.661 ++ read -r source fs size use avail _ mount 00:04:57.661 ++ mounts["$mount"]=/dev/vda1 00:04:57.661 ++ fss["$mount"]=ext4 00:04:57.661 ++ avails["$mount"]=10467303424 00:04:57.661 ++ sizes["$mount"]=20616794112 00:04:57.661 ++ uses["$mount"]=10132713472 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ mounts["$mount"]=tmpfs 00:04:57.662 ++ fss["$mount"]=tmpfs 00:04:57.662 ++ avails["$mount"]=6268399616 00:04:57.662 ++ sizes["$mount"]=6268399616 00:04:57.662 ++ uses["$mount"]=0 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ mounts["$mount"]=tmpfs 00:04:57.662 ++ fss["$mount"]=tmpfs 00:04:57.662 ++ avails["$mount"]=5242880 00:04:57.662 ++ sizes["$mount"]=5242880 00:04:57.662 ++ uses["$mount"]=0 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ mounts["$mount"]=/dev/vda15 00:04:57.662 ++ fss["$mount"]=vfat 00:04:57.662 ++ avails["$mount"]=103061504 00:04:57.662 ++ sizes["$mount"]=109395968 00:04:57.662 ++ uses["$mount"]=6334464 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ mounts["$mount"]=tmpfs 00:04:57.662 ++ fss["$mount"]=tmpfs 00:04:57.662 ++ avails["$mount"]=1253675008 00:04:57.662 ++ sizes["$mount"]=1253679104 00:04:57.662 ++ uses["$mount"]=4096 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:04:57.662 ++ fss["$mount"]=fuse.sshfs 00:04:57.662 ++ avails["$mount"]=98001653760 00:04:57.662 ++ sizes["$mount"]=105088212992 00:04:57.662 ++ uses["$mount"]=1701126144 00:04:57.662 ++ read -r source fs size use avail _ mount 00:04:57.662 ++ printf '* Looking for test storage...\n' 00:04:57.662 * Looking for test storage... 
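The set_test_storage call traced here scans the mounted filesystems with df and picks a directory with enough free space for the unit-test artifacts (2 GiB requested, padded to 2214592512 bytes before the comparison), preferring the test directory itself and falling back to a mktemp scratch path. A condensed sketch of the selection, simplified from the traced helper:

    requested_size=2147483648                      # 2 GiB, as requested above
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.UYIy5S
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

    for target_dir in "${storage_candidates[@]}"; do
        # filesystem type and free bytes backing this candidate
        read -r fstype avail < <(df --output=fstype,avail -B1 "$target_dir" 2>/dev/null | tail -1)
        [[ -n $avail ]] || continue
        [[ $fstype == tmpfs || $fstype == ramfs ]] && continue
        if (( avail >= requested_size )); then
            mkdir -p "$target_dir" && export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

Here the root ext4 filesystem has roughly 10 GB available, so the first candidate wins and the trace below reports the test storage at /home/vagrant/spdk_repo/spdk/test/unit.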
00:04:57.662 ++ local target_space new_size 00:04:57.662 ++ for target_dir in "${storage_candidates[@]}" 00:04:57.662 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.662 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:57.662 ++ mount=/ 00:04:57.662 ++ target_space=10467303424 00:04:57.662 ++ (( target_space == 0 || target_space < requested_size )) 00:04:57.662 ++ (( target_space >= requested_size )) 00:04:57.662 ++ [[ ext4 == tmpfs ]] 00:04:57.662 ++ [[ ext4 == ramfs ]] 00:04:57.662 ++ [[ / == / ]] 00:04:57.662 ++ new_size=12347305984 00:04:57.662 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:57.662 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.662 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.662 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.662 ++ return 0 00:04:57.662 ++ set -o errtrace 00:04:57.662 ++ shopt -s extdebug 00:04:57.662 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:57.662 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:57.662 05:24:01 -- common/autotest_common.sh@1672 -- # true 00:04:57.662 05:24:01 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:04:57.662 05:24:01 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:57.662 05:24:01 -- common/autotest_common.sh@29 -- # exec 00:04:57.662 05:24:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:57.662 05:24:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:57.662 05:24:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:57.662 05:24:01 -- common/autotest_common.sh@18 -- # set -x 00:04:57.662 05:24:01 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.662 05:24:01 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:57.662 05:24:01 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:57.662 05:24:01 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:57.662 05:24:01 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:57.662 05:24:01 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:57.662 05:24:01 -- unit/unittest.sh@179 -- # hash lcov 00:04:57.662 05:24:01 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.662 05:24:01 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:57.662 05:24:01 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:57.662 05:24:01 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:57.662 05:24:01 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:57.662 05:24:01 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.662 05:24:01 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.662 05:24:01 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:57.662 --rc lcov_branch_coverage=1 00:04:57.662 --rc lcov_function_coverage=1 00:04:57.662 --rc genhtml_branch_coverage=1 00:04:57.662 --rc genhtml_function_coverage=1 00:04:57.662 --rc genhtml_legend=1 00:04:57.662 --rc geninfo_all_blocks=1 00:04:57.662 ' 00:04:57.662 05:24:01 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:57.662 --rc lcov_branch_coverage=1 00:04:57.662 --rc lcov_function_coverage=1 00:04:57.662 --rc genhtml_branch_coverage=1 00:04:57.662 --rc genhtml_function_coverage=1 00:04:57.662 --rc genhtml_legend=1 00:04:57.662 
--rc geninfo_all_blocks=1 00:04:57.662 ' 00:04:57.662 05:24:01 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:57.662 --rc lcov_branch_coverage=1 00:04:57.662 --rc lcov_function_coverage=1 00:04:57.662 --rc genhtml_branch_coverage=1 00:04:57.662 --rc genhtml_function_coverage=1 00:04:57.662 --rc genhtml_legend=1 00:04:57.662 --rc geninfo_all_blocks=1 00:04:57.662 --no-external' 00:04:57.662 05:24:01 -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:57.662 --rc lcov_branch_coverage=1 00:04:57.662 --rc lcov_function_coverage=1 00:04:57.662 --rc genhtml_branch_coverage=1 00:04:57.662 --rc genhtml_function_coverage=1 00:04:57.662 --rc genhtml_legend=1 00:04:57.662 --rc geninfo_all_blocks=1 00:04:57.662 --no-external' 00:04:57.662 05:24:01 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:12.590 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:12.590 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:12.590 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:12.590 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:12.590 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:12.590 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:39.137 
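At this point unittest.sh has captured an lcov baseline: with SPDK_CONFIG_COVERAGE enabled, an initial "-i" capture records every instrumented file at zero hits so that code the tests never execute still appears in the final report. The "no functions found" warnings that follow are expected for objects that contain no instrumented functions. A sketch of the capture using the options exported above; the combine and genhtml steps are a typical follow-up and are marked hypothetical, since they are not part of this portion of the log:

    UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
               --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
               --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    mkdir -p "$UT_COVERAGE"
    # zero-coverage baseline, taken before any unit test has run
    lcov $LCOV_OPTS --no-external -q -c -i -d . -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"
    # hypothetical follow-up later in the run: post-test capture, merge, HTML report
    # lcov $LCOV_OPTS --no-external -q -c -d . -t Tests -o "$UT_COVERAGE/ut_cov_test.info"
    # lcov $LCOV_OPTS -a "$UT_COVERAGE/ut_cov_base.info" -a "$UT_COVERAGE/ut_cov_test.info" \
    #      -o "$UT_COVERAGE/ut_cov_total.info"
    # genhtml "$UT_COVERAGE/ut_cov_total.info" --legend -o "$UT_COVERAGE"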
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:39.137 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:39.137 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:39.138 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:39.138 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:39.138 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:39.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:39.139 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:39.139 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:39.139 05:24:41 -- unit/unittest.sh@206 -- # uname -m 00:05:39.139 05:24:41 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:39.139 05:24:41 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:39.139 05:24:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.139 05:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.139 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 ************************************ 00:05:39.139 START TEST unittest_pci_event 00:05:39.139 ************************************ 00:05:39.139 05:24:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:39.139 00:05:39.139 00:05:39.139 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.139 http://cunit.sourceforge.net/ 00:05:39.139 00:05:39.139 00:05:39.139 Suite: pci_event 00:05:39.139 Test: test_pci_parse_event 
...[2024-10-07 05:24:41.463963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:39.139 passed 00:05:39.139 00:05:39.139 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.139 suites 1 1 n/a 0 0 00:05:39.139 tests 1 1 1 0 0 00:05:39.139 asserts 15 15 15 0 n/a 00:05:39.139 00:05:39.139 Elapsed time = 0.001 seconds[2024-10-07 05:24:41.464808] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:39.139 00:05:39.139 00:05:39.139 real 0m0.036s 00:05:39.139 user 0m0.008s 00:05:39.139 sys 0m0.026s 00:05:39.139 05:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.139 ************************************ 00:05:39.139 END TEST unittest_pci_event 00:05:39.139 ************************************ 00:05:39.139 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 05:24:41 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:39.139 05:24:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.139 05:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.139 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 ************************************ 00:05:39.139 START TEST unittest_include 00:05:39.139 ************************************ 00:05:39.139 05:24:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:39.139 00:05:39.139 00:05:39.139 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.139 http://cunit.sourceforge.net/ 00:05:39.139 00:05:39.139 00:05:39.139 Suite: histogram 00:05:39.139 Test: histogram_test ...passed 00:05:39.139 Test: histogram_merge ...passed 00:05:39.139 00:05:39.139 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.139 suites 1 1 n/a 0 0 00:05:39.139 tests 2 2 2 0 0 00:05:39.139 asserts 50 50 50 0 n/a 00:05:39.139 00:05:39.139 Elapsed time = 0.007 seconds 00:05:39.139 00:05:39.139 real 0m0.034s 00:05:39.139 user 0m0.026s 00:05:39.139 sys 0m0.009s 00:05:39.139 05:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.139 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 ************************************ 00:05:39.139 END TEST unittest_include 00:05:39.139 ************************************ 00:05:39.139 05:24:41 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:39.139 05:24:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.139 05:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.139 05:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:39.139 ************************************ 00:05:39.139 START TEST unittest_bdev 00:05:39.139 ************************************ 00:05:39.139 05:24:41 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:39.139 05:24:41 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:39.139 00:05:39.139 00:05:39.139 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.139 http://cunit.sourceforge.net/ 00:05:39.139 00:05:39.139 00:05:39.139 Suite: bdev 00:05:39.139 Test: bytes_to_blocks_test ...passed 00:05:39.139 Test: num_blocks_test ...passed 00:05:39.139 Test: io_valid_test ...passed 00:05:39.139 Test: open_write_test ...[2024-10-07 05:24:41.715793] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:39.139 [2024-10-07 05:24:41.716096] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:39.139 [2024-10-07 05:24:41.716246] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:39.139 passed 00:05:39.139 Test: claim_test ...passed 00:05:39.139 Test: alias_add_del_test ...[2024-10-07 05:24:41.804057] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:39.139 [2024-10-07 05:24:41.804192] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:39.139 [2024-10-07 05:24:41.804247] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:39.139 passed 00:05:39.139 Test: get_device_stat_test ...passed 00:05:39.139 Test: bdev_io_types_test ...passed 00:05:39.139 Test: bdev_io_wait_test ...passed 00:05:39.139 Test: bdev_io_spans_split_test ...passed 00:05:39.139 Test: bdev_io_boundary_split_test ...passed 00:05:39.139 Test: bdev_io_max_size_and_segment_split_test ...[2024-10-07 05:24:41.982955] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:39.139 passed 00:05:39.139 Test: bdev_io_mix_split_test ...passed 00:05:39.139 Test: bdev_io_split_with_io_wait ...passed 00:05:39.139 Test: bdev_io_write_unit_split_test ...[2024-10-07 05:24:42.073196] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:39.140 [2024-10-07 05:24:42.073279] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:39.140 [2024-10-07 05:24:42.073308] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:39.140 [2024-10-07 05:24:42.073351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:39.140 passed 00:05:39.140 Test: bdev_io_alignment_with_boundary ...passed 00:05:39.140 Test: bdev_io_alignment ...passed 00:05:39.140 Test: bdev_histograms ...passed 00:05:39.140 Test: bdev_write_zeroes ...passed 00:05:39.140 Test: bdev_compare_and_write ...passed 00:05:39.140 Test: bdev_compare ...passed 00:05:39.140 Test: bdev_compare_emulated ...passed 00:05:39.140 Test: bdev_zcopy_write ...passed 00:05:39.140 Test: bdev_zcopy_read ...passed 00:05:39.140 Test: bdev_open_while_hotremove ...passed 00:05:39.140 Test: bdev_close_while_hotremove ...passed 00:05:39.140 Test: bdev_open_ext_test ...[2024-10-07 05:24:42.405647] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:39.140 passed 00:05:39.140 Test: bdev_open_ext_unregister ...passed 00:05:39.140 Test: bdev_set_io_timeout ...[2024-10-07 05:24:42.405821] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:39.140 passed 00:05:39.140 Test: bdev_set_qd_sampling ...passed 00:05:39.140 Test: lba_range_overlap ...passed 00:05:39.140 Test: lock_lba_range_check_ranges 
...passed 00:05:39.140 Test: lock_lba_range_with_io_outstanding ...passed 00:05:39.140 Test: lock_lba_range_overlapped ...passed 00:05:39.140 Test: bdev_quiesce ...[2024-10-07 05:24:42.560041] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:39.140 passed 00:05:39.140 Test: bdev_io_abort ...passed 00:05:39.140 Test: bdev_unmap ...passed 00:05:39.140 Test: bdev_write_zeroes_split_test ...passed 00:05:39.140 Test: bdev_set_options_test ...[2024-10-07 05:24:42.658469] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:39.140 passed 00:05:39.140 Test: bdev_get_memory_domains ...passed 00:05:39.140 Test: bdev_io_ext ...passed 00:05:39.140 Test: bdev_io_ext_no_opts ...passed 00:05:39.140 Test: bdev_io_ext_invalid_opts ...passed 00:05:39.140 Test: bdev_io_ext_split ...passed 00:05:39.140 Test: bdev_io_ext_bounce_buffer ...passed 00:05:39.140 Test: bdev_register_uuid_alias ...[2024-10-07 05:24:42.811770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a5ad4b54-1a39-470d-a98a-ac9bdaec67dc already exists 00:05:39.140 [2024-10-07 05:24:42.811823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a5ad4b54-1a39-470d-a98a-ac9bdaec67dc alias for bdev bdev0 00:05:39.140 passed 00:05:39.140 Test: bdev_unregister_by_name ...[2024-10-07 05:24:42.825984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:39.140 passed 00:05:39.140 Test: for_each_bdev_test ...[2024-10-07 05:24:42.826045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:05:39.140 passed 00:05:39.140 Test: bdev_seek_test ...passed 00:05:39.140 Test: bdev_copy ...passed 00:05:39.140 Test: bdev_copy_split_test ...passed 00:05:39.140 Test: examine_locks ...passed 00:05:39.140 Test: claim_v2_rwo ...[2024-10-07 05:24:42.914206] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 passed 00:05:39.140 Test: claim_v2_rom ...[2024-10-07 05:24:42.914264] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914353] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:39.140 [2024-10-07 05:24:42.914518] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914590] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:39.140 passed 00:05:39.140 Test: claim_v2_rwm ...[2024-10-07 05:24:42.914616] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:39.140 [2024-10-07 05:24:42.914713] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:39.140 [2024-10-07 05:24:42.914806] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:39.140 [2024-10-07 05:24:42.914850] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914878] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914900] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914929] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914955] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.914990] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:39.140 passed 00:05:39.140 Test: claim_v2_existing_writer ...passed 00:05:39.140 Test: claim_v2_existing_v1 ...[2024-10-07 05:24:42.915093] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:39.140 [2024-10-07 05:24:42.915120] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:39.140 [2024-10-07 05:24:42.915218] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.915245] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.915261] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:39.140 passed 00:05:39.140 Test: claim_v1_existing_v2 ...[2024-10-07 05:24:42.915351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:39.140 passed 00:05:39.140 Test: examine_claimed ...[2024-10-07 05:24:42.915394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.915429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:39.140 [2024-10-07 05:24:42.915656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:39.140 passed 00:05:39.140 00:05:39.140 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.140 suites 1 1 n/a 0 0 00:05:39.140 tests 59 59 59 0 0 00:05:39.140 asserts 4599 4599 4599 0 n/a 00:05:39.140 00:05:39.140 Elapsed time = 1.271 seconds 00:05:39.140 05:24:42 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:39.140 00:05:39.140 00:05:39.140 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.141 http://cunit.sourceforge.net/ 00:05:39.141 00:05:39.141 00:05:39.141 Suite: nvme 00:05:39.141 Test: test_create_ctrlr ...passed 00:05:39.141 Test: test_reset_ctrlr ...[2024-10-07 05:24:42.963077] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:39.141 passed 00:05:39.141 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:39.141 Test: test_failover_ctrlr ...passed 00:05:39.141 Test: test_race_between_failover_and_add_secondary_trid ...[2024-10-07 05:24:42.965859] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.966112] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.966334] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_pending_reset ...[2024-10-07 05:24:42.967778] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.968079] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_attach_ctrlr ...[2024-10-07 05:24:42.969335] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:39.141 passed 00:05:39.141 Test: test_aer_cb ...passed 00:05:39.141 Test: test_submit_nvme_cmd ...passed 00:05:39.141 Test: test_add_remove_trid ...passed 00:05:39.141 Test: test_abort ...[2024-10-07 05:24:42.972785] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:39.141 passed 00:05:39.141 Test: test_get_io_qpair ...passed 00:05:39.141 Test: test_bdev_unregister ...passed 00:05:39.141 Test: test_compare_ns ...passed 00:05:39.141 Test: test_init_ana_log_page ...passed 00:05:39.141 Test: test_get_memory_domains ...passed 00:05:39.141 Test: test_reconnect_qpair ...[2024-10-07 05:24:42.975586] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_create_bdev_ctrlr ...[2024-10-07 05:24:42.976216] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:39.141 passed 00:05:39.141 Test: test_add_multi_ns_to_bdev ...[2024-10-07 05:24:42.977671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:39.141 passed 00:05:39.141 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:39.141 Test: test_admin_path ...passed 00:05:39.141 Test: test_reset_bdev_ctrlr ...passed 00:05:39.141 Test: test_find_io_path ...passed 00:05:39.141 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:39.141 Test: test_retry_io_for_io_path_error ...passed 00:05:39.141 Test: test_retry_io_count ...passed 00:05:39.141 Test: test_concurrent_read_ana_log_page ...passed 00:05:39.141 Test: test_retry_io_for_ana_error ...passed 00:05:39.141 Test: test_check_io_error_resiliency_params ...[2024-10-07 05:24:42.985003] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:39.141 [2024-10-07 05:24:42.985085] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:39.141 [2024-10-07 05:24:42.985137] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:39.141 [2024-10-07 05:24:42.985168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:39.141 [2024-10-07 05:24:42.985212] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:39.141 [2024-10-07 05:24:42.985248] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:39.141 [2024-10-07 05:24:42.985289] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:39.141 [2024-10-07 05:24:42.985360] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:39.141 [2024-10-07 05:24:42.985392] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:39.141 passed 00:05:39.141 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:39.141 Test: test_reconnect_ctrlr ...[2024-10-07 05:24:42.986266] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.986454] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.986740] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.986899] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.987053] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_retry_failover_ctrlr ...[2024-10-07 05:24:42.987428] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_fail_path ...[2024-10-07 05:24:42.988068] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.988236] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
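The constraints that test_check_io_error_resiliency_params walks through above form one rule set over ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec. The standalone helper below simply restates the logged messages as code; it is a paraphrase for illustration, not the bdev_nvme function itself, and the parameter types are assumptions.

#include <stdbool.h>
#include <stdint.h>

static bool
io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
				 uint32_t reconnect_delay_sec,
				 uint32_t fast_io_fail_timeout_sec)
{
	if (ctrlr_loss_timeout_sec < -1) {
		return false;   /* "ctrlr_loss_timeout_sec can't be less than -1" */
	}
	if (ctrlr_loss_timeout_sec == 0) {
		/* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
		 *  if ctrlr_loss_timeout_sec is 0" */
		return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
	}
	if (reconnect_delay_sec == 0) {
		return false;   /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
	}
	if (ctrlr_loss_timeout_sec > 0 &&
	    reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
		return false;   /* "can't be more than ctrlr_loss_timeout_sec" */
	}
	if (fast_io_fail_timeout_sec != 0) {
		if (ctrlr_loss_timeout_sec > 0 &&
		    fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
			return false;   /* "fast_io_fail_timeout_sec can't be more
					 *  than ctrlr_loss_timeout_sec" */
		}
		if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
			return false;   /* "can't be more than fast_io_fail_timeout_sec" */
		}
	}
	return true;
}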
00:05:39.141 [2024-10-07 05:24:42.988379] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.988497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.988670] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_nvme_ns_cmp ...passed 00:05:39.141 Test: test_ana_transition ...passed 00:05:39.141 Test: test_set_preferred_path ...passed 00:05:39.141 Test: test_find_next_io_path ...passed 00:05:39.141 Test: test_find_io_path_min_qd ...passed 00:05:39.141 Test: test_disable_auto_failback ...[2024-10-07 05:24:42.990409] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_set_multipath_policy ...passed 00:05:39.141 Test: test_uuid_generation ...passed 00:05:39.141 Test: test_retry_io_to_same_path ...passed 00:05:39.141 Test: test_race_between_reset_and_disconnected ...passed 00:05:39.141 Test: test_ctrlr_op_rpc ...passed 00:05:39.141 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:39.141 Test: test_disable_enable_ctrlr ...[2024-10-07 05:24:42.994222] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 [2024-10-07 05:24:42.994387] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:39.141 passed 00:05:39.141 Test: test_delete_ctrlr_done ...passed 00:05:39.141 Test: test_ns_remove_during_reset ...passed 00:05:39.141 00:05:39.141 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.141 suites 1 1 n/a 0 0 00:05:39.141 tests 48 48 48 0 0 00:05:39.141 asserts 3553 3553 3553 0 n/a 00:05:39.141 00:05:39.141 Elapsed time = 0.034 seconds 00:05:39.141 05:24:43 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:39.141 Test Options 00:05:39.142 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:39.142 00:05:39.142 00:05:39.142 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.142 http://cunit.sourceforge.net/ 00:05:39.142 00:05:39.142 00:05:39.142 Suite: raid 00:05:39.142 Test: test_create_raid ...passed 00:05:39.142 Test: test_create_raid_superblock ...passed 00:05:39.142 Test: test_delete_raid ...passed 00:05:39.142 Test: test_create_raid_invalid_args ...[2024-10-07 05:24:43.028042] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:39.142 [2024-10-07 05:24:43.028314] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:39.142 [2024-10-07 05:24:43.028638] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:39.142 [2024-10-07 05:24:43.028803] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:39.142 [2024-10-07 05:24:43.029351] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:39.142 passed 00:05:39.142 Test: test_delete_raid_invalid_args ...passed 00:05:39.142 Test: test_io_channel ...passed 00:05:39.142 Test: test_reset_io ...passed 00:05:39.142 Test: test_write_io ...passed 00:05:39.142 Test: test_read_io ...passed 00:05:40.081 Test: test_unmap_io ...passed 00:05:40.081 Test: test_io_failure ...[2024-10-07 05:24:43.709965] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:40.081 passed 00:05:40.081 Test: test_multi_raid_no_io ...passed 00:05:40.081 Test: test_multi_raid_with_io ...passed 00:05:40.081 Test: test_io_type_supported ...passed 00:05:40.081 Test: test_raid_json_dump_info ...passed 00:05:40.081 Test: test_context_size ...passed 00:05:40.081 Test: test_raid_level_conversions ...passed 00:05:40.081 Test: test_raid_process ...passed 00:05:40.081 Test: test_raid_io_split ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 19 19 19 0 0 00:05:40.081 asserts 177879 177879 177879 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.691 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: raid_sb 00:05:40.081 Test: test_raid_bdev_write_superblock ...passed 00:05:40.081 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:40.081 Test: test_raid_bdev_parse_superblock ...[2024-10-07 05:24:43.755473] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:40.081 passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 3 3 3 0 0 00:05:40.081 asserts 32 32 32 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.001 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: concat 00:05:40.081 Test: test_concat_start ...passed 00:05:40.081 Test: test_concat_rw ...passed 00:05:40.081 Test: test_concat_null_payload ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 3 3 3 0 0 00:05:40.081 asserts 8097 8097 8097 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.008 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: raid1 00:05:40.081 Test: test_raid1_start ...passed 00:05:40.081 Test: test_raid1_read_balancing ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 2 2 2 0 0 00:05:40.081 asserts 2856 2856 2856 0 
n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.004 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: zone 00:05:40.081 Test: test_zone_get_operation ...passed 00:05:40.081 Test: test_bdev_zone_get_info ...passed 00:05:40.081 Test: test_bdev_zone_management ...passed 00:05:40.081 Test: test_bdev_zone_append ...passed 00:05:40.081 Test: test_bdev_zone_append_with_md ...passed 00:05:40.081 Test: test_bdev_zone_appendv ...passed 00:05:40.081 Test: test_bdev_zone_appendv_with_md ...passed 00:05:40.081 Test: test_bdev_io_get_append_location ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 8 8 8 0 0 00:05:40.081 asserts 94 94 94 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.001 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: gpt_parse 00:05:40.081 Test: test_parse_mbr_and_primary ...[2024-10-07 05:24:43.897811] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:40.081 [2024-10-07 05:24:43.898114] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:40.081 [2024-10-07 05:24:43.898178] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:40.081 [2024-10-07 05:24:43.898257] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:40.081 [2024-10-07 05:24:43.898314] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:40.081 [2024-10-07 05:24:43.898407] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:40.081 passed 00:05:40.081 Test: test_parse_secondary ...[2024-10-07 05:24:43.899228] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:40.081 [2024-10-07 05:24:43.899293] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:40.081 [2024-10-07 05:24:43.899340] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:40.081 [2024-10-07 05:24:43.899383] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:40.081 passed 00:05:40.081 Test: test_check_mbr ...[2024-10-07 05:24:43.900148] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:40.081 passed 00:05:40.081 Test: test_read_header ...[2024-10-07 05:24:43.900207] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:40.081 [2024-10-07 05:24:43.900277] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:40.081 [2024-10-07 05:24:43.900377] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:40.081 [2024-10-07 05:24:43.900468] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:40.081 [2024-10-07 05:24:43.900518] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:40.081 [2024-10-07 05:24:43.900597] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:40.081 passed 00:05:40.081 Test: test_read_partitions ...[2024-10-07 05:24:43.900648] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:40.081 [2024-10-07 05:24:43.900716] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:40.081 [2024-10-07 05:24:43.900776] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:40.081 [2024-10-07 05:24:43.900827] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:40.081 [2024-10-07 05:24:43.900857] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:40.081 [2024-10-07 05:24:43.901266] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:40.081 passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 5 5 5 0 0 00:05:40.081 asserts 33 33 33 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.004 seconds 00:05:40.081 05:24:43 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: bdev_part 00:05:40.081 Test: part_test ...[2024-10-07 05:24:43.937804] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:40.081 passed 00:05:40.081 Test: part_free_test ...passed 00:05:40.081 Test: part_get_io_channel_test ...passed 00:05:40.081 Test: part_construct_ext ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.081 suites 1 1 n/a 0 0 00:05:40.081 tests 4 4 4 0 0 00:05:40.081 asserts 48 48 48 0 n/a 00:05:40.081 00:05:40.081 Elapsed time = 0.056 seconds 00:05:40.081 05:24:44 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:40.081 00:05:40.081 00:05:40.081 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.081 http://cunit.sourceforge.net/ 00:05:40.081 00:05:40.081 00:05:40.081 Suite: scsi_nvme_suite 00:05:40.081 Test: scsi_nvme_translate_test ...passed 00:05:40.081 00:05:40.081 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.082 suites 1 1 n/a 0 0 00:05:40.082 tests 1 1 1 0 0 00:05:40.082 asserts 104 104 104 0 n/a 00:05:40.082 00:05:40.082 Elapsed time = 0.000 seconds 
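The gpt_parse failures above spell out what a header must satisfy before partition entries are read. The sketch below restates those checks over a simplified local struct; the field layout and the 92/512-byte header-size bounds are illustrative assumptions (the log only shows values like 600 being rejected), and the CRC32 checks the parser also performs are omitted.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct gpt_header_example {
	char     signature[8];          /* must read "EFI PART" */
	uint32_t header_size;
	uint64_t my_lba;                /* primary header is expected at LBA 1 */
	uint64_t usable_lba_end;
	uint32_t num_partition_entries; /* parser caps this at 128 */
	uint32_t partition_entry_size;  /* parser expects 80 bytes */
};

static bool
gpt_header_plausible(const struct gpt_header_example *h, uint64_t lba_end)
{
	if (memcmp(h->signature, "EFI PART", 8) != 0) {
		return false;   /* "signature did not match" */
	}
	if (h->header_size < 92 || h->header_size > 512) {
		return false;   /* "head_size=600" and garbage sizes are rejected */
	}
	if (h->my_lba != 1) {
		return false;   /* "head my_lba(...) != expected(1)" */
	}
	if (h->usable_lba_end > lba_end) {
		return false;   /* "usable_lba_end(...) > lba_end(...)" */
	}
	if (h->num_partition_entries > 128) {
		return false;   /* "Num_partition_entries=... exceeds max=128" */
	}
	if (h->partition_entry_size != 80) {
		return false;   /* "Partition_entry_size(0) != expected(80)" */
	}
	return true;
}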
00:05:40.082 05:24:44 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:40.342 00:05:40.342 00:05:40.342 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.342 http://cunit.sourceforge.net/ 00:05:40.342 00:05:40.342 00:05:40.342 Suite: lvol 00:05:40.342 Test: ut_lvs_init ...[2024-10-07 05:24:44.066714] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:40.342 [2024-10-07 05:24:44.067201] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:40.342 passed 00:05:40.342 Test: ut_lvol_init ...passed 00:05:40.342 Test: ut_lvol_snapshot ...passed 00:05:40.342 Test: ut_lvol_clone ...passed 00:05:40.342 Test: ut_lvs_destroy ...passed 00:05:40.342 Test: ut_lvs_unload ...passed 00:05:40.342 Test: ut_lvol_resize ...[2024-10-07 05:24:44.068850] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:40.342 passed 00:05:40.342 Test: ut_lvol_set_read_only ...passed 00:05:40.342 Test: ut_lvol_hotremove ...passed 00:05:40.342 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:40.342 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:40.342 Test: ut_lvol_read_write ...passed 00:05:40.342 Test: ut_vbdev_lvol_submit_request ...passed 00:05:40.342 Test: ut_lvol_examine_config ...passed 00:05:40.342 Test: ut_lvol_examine_disk ...[2024-10-07 05:24:44.069677] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:40.342 passed 00:05:40.342 Test: ut_lvol_rename ...[2024-10-07 05:24:44.070766] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:40.342 [2024-10-07 05:24:44.070899] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:40.342 passed 00:05:40.342 Test: ut_bdev_finish ...passed 00:05:40.342 Test: ut_lvs_rename ...passed 00:05:40.342 Test: ut_lvol_seek ...passed 00:05:40.342 Test: ut_esnap_dev_create ...[2024-10-07 05:24:44.071675] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:40.342 [2024-10-07 05:24:44.071772] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:40.342 [2024-10-07 05:24:44.071819] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:40.342 [2024-10-07 05:24:44.071872] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:40.342 passed 00:05:40.342 Test: ut_lvol_esnap_clone_bad_args ...[2024-10-07 05:24:44.072025] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:40.342 [2024-10-07 05:24:44.072079] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:40.342 passed 00:05:40.342 00:05:40.342 Run Summary: Type Total Ran Passed Failed 
Inactive 00:05:40.342 suites 1 1 n/a 0 0 00:05:40.342 tests 21 21 21 0 0 00:05:40.342 asserts 712 712 712 0 n/a 00:05:40.342 00:05:40.342 Elapsed time = 0.006 seconds 00:05:40.342 05:24:44 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:40.342 00:05:40.342 00:05:40.342 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.342 http://cunit.sourceforge.net/ 00:05:40.342 00:05:40.342 00:05:40.342 Suite: zone_block 00:05:40.342 Test: test_zone_block_create ...passed 00:05:40.342 Test: test_zone_block_create_invalid ...[2024-10-07 05:24:44.127491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:40.342 [2024-10-07 05:24:44.127834] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-10-07 05:24:44.128032] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:40.343 [2024-10-07 05:24:44.128103] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-10-07 05:24:44.128284] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:40.343 [2024-10-07 05:24:44.128339] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-10-07 05:24:44.128458] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:40.343 [2024-10-07 05:24:44.128519] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:40.343 Test: test_get_zone_info ...[2024-10-07 05:24:44.129137] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.129248] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.129322] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_supported_io_types ...passed 00:05:40.343 Test: test_reset_zone ...[2024-10-07 05:24:44.130210] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.130296] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_open_zone ...[2024-10-07 05:24:44.130841] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.131569] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:40.343 [2024-10-07 05:24:44.131652] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_zone_write ...[2024-10-07 05:24:44.132165] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:40.343 [2024-10-07 05:24:44.132236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.132303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:40.343 [2024-10-07 05:24:44.132362] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.138260] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:40.343 [2024-10-07 05:24:44.138316] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.138422] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:40.343 [2024-10-07 05:24:44.138461] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.144339] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:40.343 passed 00:05:40.343 Test: test_zone_read ...[2024-10-07 05:24:44.144413] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.144947] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:40.343 [2024-10-07 05:24:44.145007] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.145103] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:40.343 [2024-10-07 05:24:44.145155] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.145661] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:40.343 [2024-10-07 05:24:44.145717] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_close_zone ...[2024-10-07 05:24:44.146168] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:40.343 [2024-10-07 05:24:44.146263] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.146527] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.146586] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_finish_zone ...[2024-10-07 05:24:44.147306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.147387] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 passed 00:05:40.343 Test: test_append_zone ...[2024-10-07 05:24:44.147822] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:40.343 [2024-10-07 05:24:44.147881] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.147946] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:40.343 [2024-10-07 05:24:44.147985] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:40.343 [2024-10-07 05:24:44.159743] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:40.343 passed 00:05:40.343 00:05:40.343 [2024-10-07 05:24:44.159808] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
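The zone_block errors above all reduce to two write rules: a write must start exactly at the zone's current write pointer, and it must not run past the zone's capacity. The helper below restates that logic over an illustrative per-zone struct; it is not SPDK's internal bookkeeping.

#include <stdbool.h>
#include <stdint.h>

struct zone_example {
	uint64_t start_lba;      /* first LBA of the zone */
	uint64_t capacity;       /* writable blocks in the zone */
	uint64_t write_pointer;  /* next LBA that may be written */
	bool     writable;       /* full/offline zones reject writes */
};

static bool
zone_write_allowed(const struct zone_example *z, uint64_t lba, uint64_t len)
{
	if (!z->writable) {
		return false;   /* "Trying to write to zone in invalid state" */
	}
	if (lba != z->write_pointer) {
		return false;   /* "invalid address (lba 0x407, wp 0x405)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return false;   /* "Write exceeds zone capacity" */
	}
	return true;
}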
00:05:40.343 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.343 suites 1 1 n/a 0 0 00:05:40.343 tests 11 11 11 0 0 00:05:40.343 asserts 3437 3437 3437 0 n/a 00:05:40.343 00:05:40.343 Elapsed time = 0.034 seconds 00:05:40.343 05:24:44 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:40.343 00:05:40.343 00:05:40.343 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.343 http://cunit.sourceforge.net/ 00:05:40.343 00:05:40.343 00:05:40.343 Suite: bdev 00:05:40.343 Test: basic ...[2024-10-07 05:24:44.252887] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c15b859401): Operation not permitted (rc=-1) 00:05:40.343 [2024-10-07 05:24:44.253130] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55c15b8593c0): Operation not permitted (rc=-1) 00:05:40.343 [2024-10-07 05:24:44.253188] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c15b859401): Operation not permitted (rc=-1) 00:05:40.343 passed 00:05:40.343 Test: unregister_and_close ...passed 00:05:40.602 Test: unregister_and_close_different_threads ...passed 00:05:40.602 Test: basic_qos ...passed 00:05:40.602 Test: put_channel_during_reset ...passed 00:05:40.602 Test: aborted_reset ...passed 00:05:40.602 Test: aborted_reset_no_outstanding_io ...passed 00:05:40.602 Test: io_during_reset ...passed 00:05:40.602 Test: reset_completions ...passed 00:05:40.602 Test: io_during_qos_queue ...passed 00:05:40.861 Test: io_during_qos_reset ...passed 00:05:40.861 Test: enomem ...passed 00:05:40.861 Test: enomem_multi_bdev ...passed 00:05:40.861 Test: enomem_multi_bdev_unregister ...passed 00:05:40.861 Test: enomem_multi_io_target ...passed 00:05:40.861 Test: qos_dynamic_enable ...passed 00:05:40.861 Test: bdev_histograms_mt ...passed 00:05:40.861 Test: bdev_set_io_timeout_mt ...[2024-10-07 05:24:44.808824] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:40.861 passed 00:05:40.861 Test: lock_lba_range_then_submit_io ...[2024-10-07 05:24:44.823881] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55c15b859380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:41.120 passed 00:05:41.120 Test: unregister_during_reset ...passed 00:05:41.120 Test: event_notify_and_close ...passed 00:05:41.120 Test: unregister_and_qos_poller ...passed 00:05:41.120 Suite: bdev_wrong_thread 00:05:41.120 Test: spdk_bdev_register_wt ...[2024-10-07 05:24:44.933212] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:41.120 passed 00:05:41.120 Test: spdk_bdev_examine_wt ...[2024-10-07 05:24:44.933446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:41.120 passed 00:05:41.120 00:05:41.120 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.120 suites 2 2 n/a 0 0 00:05:41.120 tests 24 24 24 0 0 00:05:41.120 asserts 621 621 621 0 n/a 00:05:41.120 00:05:41.120 Elapsed time = 0.699 seconds 00:05:41.120 00:05:41.120 real 0m3.334s 00:05:41.120 user 0m1.481s 00:05:41.120 sys 0m1.858s 00:05:41.120 05:24:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.120 05:24:44 -- common/autotest_common.sh@10 -- # set +x 00:05:41.120 ************************************ 
00:05:41.120 END TEST unittest_bdev 00:05:41.120 ************************************ 00:05:41.120 05:24:45 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:41.120 05:24:45 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:41.120 05:24:45 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:41.120 05:24:45 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:41.120 05:24:45 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:41.120 05:24:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.120 05:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.120 05:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:41.120 ************************************ 00:05:41.120 START TEST unittest_bdev_raid5f 00:05:41.120 ************************************ 00:05:41.120 05:24:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:41.120 00:05:41.120 00:05:41.120 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.120 http://cunit.sourceforge.net/ 00:05:41.120 00:05:41.120 00:05:41.120 Suite: raid5f 00:05:41.120 Test: test_raid5f_start ...passed 00:05:41.687 Test: test_raid5f_submit_read_request ...passed 00:05:41.946 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:45.256 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:00.132 Test: test_raid5f_chunk_write_error ...passed 00:06:05.402 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:07.305 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:29.238 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:29.238 00:06:29.238 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.238 suites 1 1 n/a 0 0 00:06:29.238 tests 8 8 8 0 0 00:06:29.238 asserts 351864 351864 351864 0 n/a 00:06:29.238 00:06:29.238 Elapsed time = 48.093 seconds 00:06:29.238 00:06:29.238 real 0m48.167s 00:06:29.238 user 0m45.749s 00:06:29.238 sys 0m2.417s 00:06:29.238 05:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.238 ************************************ 00:06:29.238 END TEST unittest_bdev_raid5f 00:06:29.238 ************************************ 00:06:29.238 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.497 05:25:33 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:29.497 05:25:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.497 05:25:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.497 05:25:33 -- common/autotest_common.sh@10 -- # set +x 00:06:29.497 ************************************ 00:06:29.497 START TEST unittest_blob_blobfs 00:06:29.497 ************************************ 00:06:29.497 05:25:33 -- common/autotest_common.sh@1104 -- # unittest_blob 00:06:29.497 05:25:33 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:29.497 05:25:33 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:29.497 00:06:29.497 00:06:29.497 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.497 
http://cunit.sourceforge.net/ 00:06:29.497 00:06:29.497 00:06:29.497 Suite: blob_nocopy_noextent 00:06:29.497 Test: blob_init ...[2024-10-07 05:25:33.286816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:29.497 passed 00:06:29.497 Test: blob_thin_provision ...passed 00:06:29.497 Test: blob_read_only ...passed 00:06:29.497 Test: bs_load ...[2024-10-07 05:25:33.408601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:29.497 passed 00:06:29.497 Test: bs_load_custom_cluster_size ...passed 00:06:29.497 Test: bs_load_after_failed_grow ...passed 00:06:29.497 Test: bs_cluster_sz ...[2024-10-07 05:25:33.440995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:29.497 [2024-10-07 05:25:33.441653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:06:29.497 [2024-10-07 05:25:33.441992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:29.497 passed 00:06:29.497 Test: bs_resize_md ...passed 00:06:29.756 Test: bs_destroy ...passed 00:06:29.756 Test: bs_type ...passed 00:06:29.756 Test: bs_super_block ...passed 00:06:29.756 Test: bs_test_recover_cluster_count ...passed 00:06:29.756 Test: bs_grow_live ...passed 00:06:29.756 Test: bs_grow_live_no_space ...passed 00:06:29.756 Test: bs_test_grow ...passed 00:06:29.756 Test: blob_serialize_test ...passed 00:06:29.756 Test: super_block_crc ...passed 00:06:29.756 Test: blob_thin_prov_write_count_io ...passed 00:06:29.756 Test: bs_load_iter_test ...passed 00:06:29.756 Test: blob_relations ...[2024-10-07 05:25:33.616856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.617221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 [2024-10-07 05:25:33.618562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.618785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 passed 00:06:29.756 Test: blob_relations2 ...[2024-10-07 05:25:33.634252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.634618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 [2024-10-07 05:25:33.634860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.635071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 [2024-10-07 05:25:33.636973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.637179] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 [2024-10-07 05:25:33.637766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:29.756 [2024-10-07 05:25:33.637966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:29.756 passed 00:06:29.756 Test: blob_relations3 ...passed 00:06:30.016 Test: blobstore_clean_power_failure ...passed 00:06:30.016 Test: blob_delete_snapshot_power_failure ...[2024-10-07 05:25:33.776829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:30.016 [2024-10-07 05:25:33.788117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:30.016 [2024-10-07 05:25:33.788410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:30.016 [2024-10-07 05:25:33.788498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:30.016 [2024-10-07 05:25:33.799605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:30.016 [2024-10-07 05:25:33.799876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:30.016 [2024-10-07 05:25:33.799964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:30.016 [2024-10-07 05:25:33.800094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:30.016 [2024-10-07 05:25:33.811260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:30.016 [2024-10-07 05:25:33.811627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:30.016 [2024-10-07 05:25:33.822774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:30.016 [2024-10-07 05:25:33.823085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:30.016 [2024-10-07 05:25:33.834894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:30.016 [2024-10-07 05:25:33.835232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:30.016 passed 00:06:30.016 Test: blob_create_snapshot_power_failure ...[2024-10-07 05:25:33.869233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:30.016 [2024-10-07 05:25:33.890862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:30.016 [2024-10-07 05:25:33.902037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:30.016 passed 00:06:30.016 Test: blob_io_unit ...passed 00:06:30.016 Test: blob_io_unit_compatibility 
...passed 00:06:30.016 Test: blob_ext_md_pages ...passed 00:06:30.274 Test: blob_esnap_io_4096_4096 ...passed 00:06:30.274 Test: blob_esnap_io_512_512 ...passed 00:06:30.274 Test: blob_esnap_io_4096_512 ...passed 00:06:30.274 Test: blob_esnap_io_512_4096 ...passed 00:06:30.274 Suite: blob_bs_nocopy_noextent 00:06:30.274 Test: blob_open ...passed 00:06:30.274 Test: blob_create ...[2024-10-07 05:25:34.122192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:30.274 passed 00:06:30.274 Test: blob_create_loop ...passed 00:06:30.274 Test: blob_create_fail ...[2024-10-07 05:25:34.208123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:30.274 passed 00:06:30.533 Test: blob_create_internal ...passed 00:06:30.533 Test: blob_create_zero_extent ...passed 00:06:30.533 Test: blob_snapshot ...passed 00:06:30.533 Test: blob_clone ...passed 00:06:30.533 Test: blob_inflate ...[2024-10-07 05:25:34.375329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:30.533 passed 00:06:30.533 Test: blob_delete ...passed 00:06:30.533 Test: blob_resize_test ...[2024-10-07 05:25:34.436232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:30.533 passed 00:06:30.533 Test: channel_ops ...passed 00:06:30.533 Test: blob_super ...passed 00:06:30.792 Test: blob_rw_verify_iov ...passed 00:06:30.792 Test: blob_unmap ...passed 00:06:30.792 Test: blob_iter ...passed 00:06:30.792 Test: blob_parse_md ...passed 00:06:30.792 Test: bs_load_pending_removal ...passed 00:06:30.792 Test: bs_unload ...[2024-10-07 05:25:34.670013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:30.792 passed 00:06:30.792 Test: bs_usable_clusters ...passed 00:06:30.792 Test: blob_crc ...[2024-10-07 05:25:34.729726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:30.792 [2024-10-07 05:25:34.730138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:30.792 passed 00:06:31.050 Test: blob_flags ...passed 00:06:31.050 Test: bs_version ...passed 00:06:31.050 Test: blob_set_xattrs_test ...[2024-10-07 05:25:34.820022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:31.050 [2024-10-07 05:25:34.820376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:31.050 passed 00:06:31.050 Test: blob_thin_prov_alloc ...passed 00:06:31.050 Test: blob_insert_cluster_msg_test ...passed 00:06:31.050 Test: blob_thin_prov_rw ...passed 00:06:31.309 Test: blob_thin_prov_rle ...passed 00:06:31.309 Test: blob_thin_prov_rw_iov ...passed 00:06:31.309 Test: blob_snapshot_rw ...passed 00:06:31.309 Test: blob_snapshot_rw_iov ...passed 00:06:31.567 Test: blob_inflate_rw ...passed 00:06:31.567 Test: blob_snapshot_freeze_io ...passed 00:06:31.567 Test: blob_operation_split_rw ...passed 00:06:31.825 Test: blob_operation_split_rw_iov ...passed 
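The blob_init and bs_cluster_sz failures earlier in this suite ("unsupported dev block length of 500", "Cluster size 4095 is smaller than page size 4096") come from blobstore initialization. A minimal init sketch follows; the 4 MiB cluster size is an arbitrary legal choice, and the size argument to spdk_bs_opts_init is an assumption that holds for recent SPDK releases (older releases take only the opts pointer).

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* bserrno != 0 covers the failures above, e.g. a cluster size smaller
	 * than the 4096-byte metadata page or a dev block length the store rejects. */
}

static void
init_blobstore_sketch(struct spdk_bs_dev *dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4 * 1024 * 1024;   /* must be at least the 4096-byte page */
	spdk_bs_init(dev, &opts, bs_init_done, NULL);
}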
00:06:31.826 Test: blob_simultaneous_operations ...[2024-10-07 05:25:35.640262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.826 [2024-10-07 05:25:35.640604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.826 [2024-10-07 05:25:35.641693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.826 [2024-10-07 05:25:35.641847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.826 [2024-10-07 05:25:35.651815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.826 [2024-10-07 05:25:35.652017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.826 [2024-10-07 05:25:35.652193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:31.826 [2024-10-07 05:25:35.652406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:31.826 passed 00:06:31.826 Test: blob_persist_test ...passed 00:06:31.826 Test: blob_decouple_snapshot ...passed 00:06:31.826 Test: blob_seek_io_unit ...passed 00:06:32.084 Test: blob_nested_freezes ...passed 00:06:32.084 Suite: blob_blob_nocopy_noextent 00:06:32.084 Test: blob_write ...passed 00:06:32.084 Test: blob_read ...passed 00:06:32.084 Test: blob_rw_verify ...passed 00:06:32.084 Test: blob_rw_verify_iov_nomem ...passed 00:06:32.084 Test: blob_rw_iov_read_only ...passed 00:06:32.084 Test: blob_xattr ...passed 00:06:32.084 Test: blob_dirty_shutdown ...passed 00:06:32.084 Test: blob_is_degraded ...passed 00:06:32.084 Suite: blob_esnap_bs_nocopy_noextent 00:06:32.342 Test: blob_esnap_create ...passed 00:06:32.342 Test: blob_esnap_thread_add_remove ...passed 00:06:32.342 Test: blob_esnap_clone_snapshot ...passed 00:06:32.342 Test: blob_esnap_clone_inflate ...passed 00:06:32.342 Test: blob_esnap_clone_decouple ...passed 00:06:32.342 Test: blob_esnap_clone_reload ...passed 00:06:32.342 Test: blob_esnap_hotplug ...passed 00:06:32.342 Suite: blob_nocopy_extent 00:06:32.342 Test: blob_init ...[2024-10-07 05:25:36.261367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:32.342 passed 00:06:32.342 Test: blob_thin_provision ...passed 00:06:32.342 Test: blob_read_only ...passed 00:06:32.342 Test: bs_load ...[2024-10-07 05:25:36.305124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:32.342 passed 00:06:32.600 Test: bs_load_custom_cluster_size ...passed 00:06:32.600 Test: bs_load_after_failed_grow ...passed 00:06:32.600 Test: bs_cluster_sz ...[2024-10-07 05:25:36.329564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:32.600 [2024-10-07 05:25:36.329877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:32.600 [2024-10-07 05:25:36.330048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:32.600 passed 00:06:32.600 Test: bs_resize_md ...passed 00:06:32.600 Test: bs_destroy ...passed 00:06:32.600 Test: bs_type ...passed 00:06:32.600 Test: bs_super_block ...passed 00:06:32.600 Test: bs_test_recover_cluster_count ...passed 00:06:32.600 Test: bs_grow_live ...passed 00:06:32.600 Test: bs_grow_live_no_space ...passed 00:06:32.600 Test: bs_test_grow ...passed 00:06:32.600 Test: blob_serialize_test ...passed 00:06:32.600 Test: super_block_crc ...passed 00:06:32.600 Test: blob_thin_prov_write_count_io ...passed 00:06:32.600 Test: bs_load_iter_test ...passed 00:06:32.600 Test: blob_relations ...[2024-10-07 05:25:36.469993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.600 [2024-10-07 05:25:36.470231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.600 [2024-10-07 05:25:36.471278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.600 [2024-10-07 05:25:36.471488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.600 passed 00:06:32.601 Test: blob_relations2 ...[2024-10-07 05:25:36.485451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.601 [2024-10-07 05:25:36.485637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.601 [2024-10-07 05:25:36.485827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.601 [2024-10-07 05:25:36.485950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.601 [2024-10-07 05:25:36.487391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.601 [2024-10-07 05:25:36.487569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.601 [2024-10-07 05:25:36.488140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:32.601 [2024-10-07 05:25:36.488321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.601 passed 00:06:32.601 Test: blob_relations3 ...passed 00:06:32.859 Test: blobstore_clean_power_failure ...passed 00:06:32.859 Test: blob_delete_snapshot_power_failure ...[2024-10-07 05:25:36.627532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.859 [2024-10-07 05:25:36.638806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.859 [2024-10-07 05:25:36.650062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:32.859 [2024-10-07 05:25:36.650367] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.859 [2024-10-07 05:25:36.650437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 [2024-10-07 05:25:36.661698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.859 [2024-10-07 05:25:36.661974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:32.859 [2024-10-07 05:25:36.662045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.859 [2024-10-07 05:25:36.662164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 [2024-10-07 05:25:36.673376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.859 [2024-10-07 05:25:36.673650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:32.859 [2024-10-07 05:25:36.673716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:32.859 [2024-10-07 05:25:36.673857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 [2024-10-07 05:25:36.685160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:32.859 [2024-10-07 05:25:36.685463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 [2024-10-07 05:25:36.696791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:32.859 [2024-10-07 05:25:36.697103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 [2024-10-07 05:25:36.708507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:32.859 [2024-10-07 05:25:36.708799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:32.859 passed 00:06:32.859 Test: blob_create_snapshot_power_failure ...[2024-10-07 05:25:36.742419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:32.859 [2024-10-07 05:25:36.753428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:32.859 [2024-10-07 05:25:36.774705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:32.859 [2024-10-07 05:25:36.786098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:32.859 passed 00:06:32.859 Test: blob_io_unit ...passed 00:06:33.118 Test: blob_io_unit_compatibility ...passed 00:06:33.118 Test: blob_ext_md_pages ...passed 00:06:33.118 Test: blob_esnap_io_4096_4096 ...passed 00:06:33.118 Test: blob_esnap_io_512_512 ...passed 00:06:33.118 Test: blob_esnap_io_4096_512 ...passed 00:06:33.118 Test: 
blob_esnap_io_512_4096 ...passed 00:06:33.118 Suite: blob_bs_nocopy_extent 00:06:33.118 Test: blob_open ...passed 00:06:33.118 Test: blob_create ...[2024-10-07 05:25:37.000782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:33.118 passed 00:06:33.118 Test: blob_create_loop ...passed 00:06:33.377 Test: blob_create_fail ...[2024-10-07 05:25:37.093560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.377 passed 00:06:33.377 Test: blob_create_internal ...passed 00:06:33.377 Test: blob_create_zero_extent ...passed 00:06:33.377 Test: blob_snapshot ...passed 00:06:33.377 Test: blob_clone ...passed 00:06:33.377 Test: blob_inflate ...[2024-10-07 05:25:37.252414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:33.377 passed 00:06:33.377 Test: blob_delete ...passed 00:06:33.377 Test: blob_resize_test ...[2024-10-07 05:25:37.313865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:33.377 passed 00:06:33.636 Test: channel_ops ...passed 00:06:33.636 Test: blob_super ...passed 00:06:33.636 Test: blob_rw_verify_iov ...passed 00:06:33.636 Test: blob_unmap ...passed 00:06:33.636 Test: blob_iter ...passed 00:06:33.636 Test: blob_parse_md ...passed 00:06:33.636 Test: bs_load_pending_removal ...passed 00:06:33.636 Test: bs_unload ...[2024-10-07 05:25:37.555089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:33.636 passed 00:06:33.636 Test: bs_usable_clusters ...passed 00:06:33.894 Test: blob_crc ...[2024-10-07 05:25:37.615438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:33.894 [2024-10-07 05:25:37.615803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:33.894 passed 00:06:33.894 Test: blob_flags ...passed 00:06:33.894 Test: bs_version ...passed 00:06:33.894 Test: blob_set_xattrs_test ...[2024-10-07 05:25:37.705225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.894 [2024-10-07 05:25:37.705574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:33.894 passed 00:06:33.894 Test: blob_thin_prov_alloc ...passed 00:06:33.894 Test: blob_insert_cluster_msg_test ...passed 00:06:34.152 Test: blob_thin_prov_rw ...passed 00:06:34.152 Test: blob_thin_prov_rle ...passed 00:06:34.152 Test: blob_thin_prov_rw_iov ...passed 00:06:34.152 Test: blob_snapshot_rw ...passed 00:06:34.152 Test: blob_snapshot_rw_iov ...passed 00:06:34.410 Test: blob_inflate_rw ...passed 00:06:34.410 Test: blob_snapshot_freeze_io ...passed 00:06:34.669 Test: blob_operation_split_rw ...passed 00:06:34.669 Test: blob_operation_split_rw_iov ...passed 00:06:34.669 Test: blob_simultaneous_operations ...[2024-10-07 05:25:38.566649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.669 [2024-10-07 
05:25:38.566916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.669 [2024-10-07 05:25:38.568034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.669 [2024-10-07 05:25:38.568217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.669 [2024-10-07 05:25:38.579264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.669 [2024-10-07 05:25:38.579433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.669 [2024-10-07 05:25:38.579586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:34.669 [2024-10-07 05:25:38.579830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:34.669 passed 00:06:34.669 Test: blob_persist_test ...passed 00:06:34.928 Test: blob_decouple_snapshot ...passed 00:06:34.928 Test: blob_seek_io_unit ...passed 00:06:34.928 Test: blob_nested_freezes ...passed 00:06:34.928 Suite: blob_blob_nocopy_extent 00:06:34.928 Test: blob_write ...passed 00:06:34.928 Test: blob_read ...passed 00:06:34.928 Test: blob_rw_verify ...passed 00:06:35.204 Test: blob_rw_verify_iov_nomem ...passed 00:06:35.204 Test: blob_rw_iov_read_only ...passed 00:06:35.204 Test: blob_xattr ...passed 00:06:35.204 Test: blob_dirty_shutdown ...passed 00:06:35.204 Test: blob_is_degraded ...passed 00:06:35.204 Suite: blob_esnap_bs_nocopy_extent 00:06:35.204 Test: blob_esnap_create ...passed 00:06:35.204 Test: blob_esnap_thread_add_remove ...passed 00:06:35.204 Test: blob_esnap_clone_snapshot ...passed 00:06:35.475 Test: blob_esnap_clone_inflate ...passed 00:06:35.475 Test: blob_esnap_clone_decouple ...passed 00:06:35.475 Test: blob_esnap_clone_reload ...passed 00:06:35.475 Test: blob_esnap_hotplug ...passed 00:06:35.475 Suite: blob_copy_noextent 00:06:35.475 Test: blob_init ...[2024-10-07 05:25:39.279020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:35.475 passed 00:06:35.475 Test: blob_thin_provision ...passed 00:06:35.475 Test: blob_read_only ...passed 00:06:35.475 Test: bs_load ...[2024-10-07 05:25:39.344088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:35.475 passed 00:06:35.475 Test: bs_load_custom_cluster_size ...passed 00:06:35.475 Test: bs_load_after_failed_grow ...passed 00:06:35.475 Test: bs_cluster_sz ...[2024-10-07 05:25:39.388837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:35.475 [2024-10-07 05:25:39.389323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
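The bs_cluster_sz failures above ("Blobstore options cannot be set to 0", "Cluster size 4095 is smaller than page size 4096") are option checks performed during spdk_bs_init(). Below is a hedged sketch of how such an invalid cluster size would be submitted; it assumes the two-argument spdk_bs_opts_init() of recent SPDK releases (older releases take only the opts pointer), and construction of the bs_dev is omitted.

    #include <stdio.h>
    #include "spdk/blob.h"

    static void
    init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
            /* bserrno is negative when the option checks reject the request,
             * e.g. a cluster size below the 4096-byte metadata page. */
            if (bserrno != 0) {
                    fprintf(stderr, "bs_init failed: %d\n", bserrno);
            }
    }

    static void
    init_with_bad_cluster_size(struct spdk_bs_dev *bs_dev)
    {
            struct spdk_bs_opts opts;

            spdk_bs_opts_init(&opts, sizeof(opts));
            opts.cluster_sz = 4095;  /* smaller than the 4096-byte page -> rejected */
            spdk_bs_init(bs_dev, &opts, init_done, NULL);
    }
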
00:06:35.475 [2024-10-07 05:25:39.389579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:35.476 passed 00:06:35.476 Test: bs_resize_md ...passed 00:06:35.735 Test: bs_destroy ...passed 00:06:35.735 Test: bs_type ...passed 00:06:35.735 Test: bs_super_block ...passed 00:06:35.735 Test: bs_test_recover_cluster_count ...passed 00:06:35.735 Test: bs_grow_live ...passed 00:06:35.735 Test: bs_grow_live_no_space ...passed 00:06:35.735 Test: bs_test_grow ...passed 00:06:35.735 Test: blob_serialize_test ...passed 00:06:35.735 Test: super_block_crc ...passed 00:06:35.735 Test: blob_thin_prov_write_count_io ...passed 00:06:35.735 Test: bs_load_iter_test ...passed 00:06:35.735 Test: blob_relations ...[2024-10-07 05:25:39.651109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.651451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 [2024-10-07 05:25:39.652366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.652546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 passed 00:06:35.735 Test: blob_relations2 ...[2024-10-07 05:25:39.674585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.674937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 [2024-10-07 05:25:39.675013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.675248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 [2024-10-07 05:25:39.676373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.676549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 [2024-10-07 05:25:39.676897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:35.735 [2024-10-07 05:25:39.677048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:35.735 passed 00:06:35.735 Test: blob_relations3 ...passed 00:06:35.994 Test: blobstore_clean_power_failure ...passed 00:06:35.994 Test: blob_delete_snapshot_power_failure ...[2024-10-07 05:25:39.950656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:36.253 [2024-10-07 05:25:39.972243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:36.253 [2024-10-07 05:25:39.972716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:36.253 [2024-10-07 05:25:39.972803] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:36.253 [2024-10-07 05:25:39.993992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:36.253 [2024-10-07 05:25:39.994367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:36.253 [2024-10-07 05:25:39.994454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:36.253 [2024-10-07 05:25:39.994728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:36.253 [2024-10-07 05:25:40.015875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:36.253 [2024-10-07 05:25:40.016265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:36.253 [2024-10-07 05:25:40.036914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:36.254 [2024-10-07 05:25:40.037317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:36.254 [2024-10-07 05:25:40.058128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:36.254 [2024-10-07 05:25:40.058654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:36.254 passed 00:06:36.254 Test: blob_create_snapshot_power_failure ...[2024-10-07 05:25:40.119403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:36.254 [2024-10-07 05:25:40.158907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:36.254 [2024-10-07 05:25:40.179525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:36.512 passed 00:06:36.512 Test: blob_io_unit ...passed 00:06:36.512 Test: blob_io_unit_compatibility ...passed 00:06:36.512 Test: blob_ext_md_pages ...passed 00:06:36.512 Test: blob_esnap_io_4096_4096 ...passed 00:06:36.512 Test: blob_esnap_io_512_512 ...passed 00:06:36.512 Test: blob_esnap_io_4096_512 ...passed 00:06:36.771 Test: blob_esnap_io_512_4096 ...passed 00:06:36.771 Suite: blob_bs_copy_noextent 00:06:36.771 Test: blob_open ...passed 00:06:36.771 Test: blob_create ...[2024-10-07 05:25:40.595296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:36.771 passed 00:06:36.771 Test: blob_create_loop ...passed 00:06:36.771 Test: blob_create_fail ...[2024-10-07 05:25:40.745034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:37.031 passed 00:06:37.031 Test: blob_create_internal ...passed 00:06:37.031 Test: blob_create_zero_extent ...passed 00:06:37.031 Test: blob_snapshot ...passed 00:06:37.031 Test: blob_clone ...passed 00:06:37.290 Test: blob_inflate ...[2024-10-07 05:25:41.049905] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:37.290 passed 00:06:37.290 Test: blob_delete ...passed 00:06:37.290 Test: blob_resize_test ...[2024-10-07 05:25:41.161595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:37.290 passed 00:06:37.290 Test: channel_ops ...passed 00:06:37.549 Test: blob_super ...passed 00:06:37.549 Test: blob_rw_verify_iov ...passed 00:06:37.549 Test: blob_unmap ...passed 00:06:37.549 Test: blob_iter ...passed 00:06:37.808 Test: blob_parse_md ...passed 00:06:37.808 Test: bs_load_pending_removal ...passed 00:06:37.808 Test: bs_unload ...[2024-10-07 05:25:41.627944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:37.808 passed 00:06:37.808 Test: bs_usable_clusters ...passed 00:06:37.808 Test: blob_crc ...[2024-10-07 05:25:41.746159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:37.808 [2024-10-07 05:25:41.746798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:37.808 passed 00:06:38.067 Test: blob_flags ...passed 00:06:38.067 Test: bs_version ...passed 00:06:38.067 Test: blob_set_xattrs_test ...[2024-10-07 05:25:41.929804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:38.067 [2024-10-07 05:25:41.930209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:38.067 passed 00:06:38.326 Test: blob_thin_prov_alloc ...passed 00:06:38.326 Test: blob_insert_cluster_msg_test ...passed 00:06:38.326 Test: blob_thin_prov_rw ...passed 00:06:38.585 Test: blob_thin_prov_rle ...passed 00:06:38.585 Test: blob_thin_prov_rw_iov ...passed 00:06:38.585 Test: blob_snapshot_rw ...passed 00:06:38.585 Test: blob_snapshot_rw_iov ...passed 00:06:38.844 Test: blob_inflate_rw ...passed 00:06:38.844 Test: blob_snapshot_freeze_io ...passed 00:06:39.103 Test: blob_operation_split_rw ...passed 00:06:39.103 Test: blob_operation_split_rw_iov ...passed 00:06:39.103 Test: blob_simultaneous_operations ...[2024-10-07 05:25:43.073935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:39.103 [2024-10-07 05:25:43.074309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:39.103 [2024-10-07 05:25:43.074886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:39.103 [2024-10-07 05:25:43.075088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:39.103 [2024-10-07 05:25:43.077605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:39.103 [2024-10-07 05:25:43.077796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:39.362 [2024-10-07 05:25:43.077937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:39.362 [2024-10-07 05:25:43.078113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:39.362 passed 00:06:39.362 Test: blob_persist_test ...passed 00:06:39.362 Test: blob_decouple_snapshot ...passed 00:06:39.362 Test: blob_seek_io_unit ...passed 00:06:39.362 Test: blob_nested_freezes ...passed 00:06:39.362 Suite: blob_blob_copy_noextent 00:06:39.362 Test: blob_write ...passed 00:06:39.362 Test: blob_read ...passed 00:06:39.362 Test: blob_rw_verify ...passed 00:06:39.621 Test: blob_rw_verify_iov_nomem ...passed 00:06:39.621 Test: blob_rw_iov_read_only ...passed 00:06:39.621 Test: blob_xattr ...passed 00:06:39.621 Test: blob_dirty_shutdown ...passed 00:06:39.621 Test: blob_is_degraded ...passed 00:06:39.621 Suite: blob_esnap_bs_copy_noextent 00:06:39.621 Test: blob_esnap_create ...passed 00:06:39.621 Test: blob_esnap_thread_add_remove ...passed 00:06:39.621 Test: blob_esnap_clone_snapshot ...passed 00:06:39.880 Test: blob_esnap_clone_inflate ...passed 00:06:39.880 Test: blob_esnap_clone_decouple ...passed 00:06:39.880 Test: blob_esnap_clone_reload ...passed 00:06:39.880 Test: blob_esnap_hotplug ...passed 00:06:39.880 Suite: blob_copy_extent 00:06:39.880 Test: blob_init ...[2024-10-07 05:25:43.704884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:39.880 passed 00:06:39.880 Test: blob_thin_provision ...passed 00:06:39.880 Test: blob_read_only ...passed 00:06:39.880 Test: bs_load ...[2024-10-07 05:25:43.747466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:39.880 passed 00:06:39.880 Test: bs_load_custom_cluster_size ...passed 00:06:39.880 Test: bs_load_after_failed_grow ...passed 00:06:39.880 Test: bs_cluster_sz ...[2024-10-07 05:25:43.770857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:39.880 [2024-10-07 05:25:43.771114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:39.880 [2024-10-07 05:25:43.771308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:39.880 passed 00:06:39.880 Test: bs_resize_md ...passed 00:06:39.880 Test: bs_destroy ...passed 00:06:39.880 Test: bs_type ...passed 00:06:39.880 Test: bs_super_block ...passed 00:06:39.880 Test: bs_test_recover_cluster_count ...passed 00:06:39.880 Test: bs_grow_live ...passed 00:06:39.880 Test: bs_grow_live_no_space ...passed 00:06:40.138 Test: bs_test_grow ...passed 00:06:40.138 Test: blob_serialize_test ...passed 00:06:40.138 Test: super_block_crc ...passed 00:06:40.138 Test: blob_thin_prov_write_count_io ...passed 00:06:40.138 Test: bs_load_iter_test ...passed 00:06:40.138 Test: blob_relations ...[2024-10-07 05:25:43.930292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.930633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 [2024-10-07 05:25:43.931731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.931932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 passed 00:06:40.138 Test: blob_relations2 ...[2024-10-07 05:25:43.948152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.948480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 [2024-10-07 05:25:43.948584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.948792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 [2024-10-07 05:25:43.950299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.950561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 [2024-10-07 05:25:43.951151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:40.138 [2024-10-07 05:25:43.951348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.138 passed 00:06:40.138 Test: blob_relations3 ...passed 00:06:40.138 Test: blobstore_clean_power_failure ...passed 00:06:40.396 Test: blob_delete_snapshot_power_failure ...[2024-10-07 05:25:44.115698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:40.396 [2024-10-07 05:25:44.128204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:40.396 [2024-10-07 05:25:44.140790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:40.396 [2024-10-07 05:25:44.141106] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:40.396 [2024-10-07 05:25:44.141202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 [2024-10-07 05:25:44.157721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:40.396 [2024-10-07 05:25:44.158018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:40.396 [2024-10-07 05:25:44.158105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:40.396 [2024-10-07 05:25:44.158266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 [2024-10-07 05:25:44.170543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:40.396 [2024-10-07 05:25:44.170823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:40.396 [2024-10-07 05:25:44.170899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:40.396 [2024-10-07 05:25:44.171057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 [2024-10-07 05:25:44.183245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:40.396 [2024-10-07 05:25:44.183535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 [2024-10-07 05:25:44.196102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:40.396 [2024-10-07 05:25:44.196408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 [2024-10-07 05:25:44.209118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:40.396 [2024-10-07 05:25:44.209400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:40.396 passed 00:06:40.396 Test: blob_create_snapshot_power_failure ...[2024-10-07 05:25:44.247604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:40.396 [2024-10-07 05:25:44.259815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:40.396 [2024-10-07 05:25:44.284089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:40.396 [2024-10-07 05:25:44.296767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:40.396 passed 00:06:40.396 Test: blob_io_unit ...passed 00:06:40.396 Test: blob_io_unit_compatibility ...passed 00:06:40.654 Test: blob_ext_md_pages ...passed 00:06:40.654 Test: blob_esnap_io_4096_4096 ...passed 00:06:40.654 Test: blob_esnap_io_512_512 ...passed 00:06:40.654 Test: blob_esnap_io_4096_512 ...passed 00:06:40.654 Test: 
blob_esnap_io_512_4096 ...passed 00:06:40.654 Suite: blob_bs_copy_extent 00:06:40.654 Test: blob_open ...passed 00:06:40.654 Test: blob_create ...[2024-10-07 05:25:44.557630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:40.654 passed 00:06:40.912 Test: blob_create_loop ...passed 00:06:40.912 Test: blob_create_fail ...[2024-10-07 05:25:44.661884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:40.912 passed 00:06:40.912 Test: blob_create_internal ...passed 00:06:40.912 Test: blob_create_zero_extent ...passed 00:06:40.912 Test: blob_snapshot ...passed 00:06:40.912 Test: blob_clone ...passed 00:06:40.912 Test: blob_inflate ...[2024-10-07 05:25:44.831058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:40.912 passed 00:06:40.912 Test: blob_delete ...passed 00:06:41.171 Test: blob_resize_test ...[2024-10-07 05:25:44.894204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:41.171 passed 00:06:41.171 Test: channel_ops ...passed 00:06:41.171 Test: blob_super ...passed 00:06:41.171 Test: blob_rw_verify_iov ...passed 00:06:41.171 Test: blob_unmap ...passed 00:06:41.171 Test: blob_iter ...passed 00:06:41.171 Test: blob_parse_md ...passed 00:06:41.171 Test: bs_load_pending_removal ...passed 00:06:41.429 Test: bs_unload ...[2024-10-07 05:25:45.156357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:41.429 passed 00:06:41.429 Test: bs_usable_clusters ...passed 00:06:41.429 Test: blob_crc ...[2024-10-07 05:25:45.216598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:41.430 [2024-10-07 05:25:45.217017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:41.430 passed 00:06:41.430 Test: blob_flags ...passed 00:06:41.430 Test: bs_version ...passed 00:06:41.430 Test: blob_set_xattrs_test ...[2024-10-07 05:25:45.307270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:41.430 [2024-10-07 05:25:45.307673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:41.430 passed 00:06:41.688 Test: blob_thin_prov_alloc ...passed 00:06:41.688 Test: blob_insert_cluster_msg_test ...passed 00:06:41.688 Test: blob_thin_prov_rw ...passed 00:06:41.688 Test: blob_thin_prov_rle ...passed 00:06:41.688 Test: blob_thin_prov_rw_iov ...passed 00:06:41.688 Test: blob_snapshot_rw ...passed 00:06:41.688 Test: blob_snapshot_rw_iov ...passed 00:06:41.947 Test: blob_inflate_rw ...passed 00:06:41.947 Test: blob_snapshot_freeze_io ...passed 00:06:42.206 Test: blob_operation_split_rw ...passed 00:06:42.206 Test: blob_operation_split_rw_iov ...passed 00:06:42.206 Test: blob_simultaneous_operations ...[2024-10-07 05:25:46.125308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:42.206 [2024-10-07 
05:25:46.125654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:42.206 [2024-10-07 05:25:46.126129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:42.206 [2024-10-07 05:25:46.126313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:42.206 [2024-10-07 05:25:46.128815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:42.206 [2024-10-07 05:25:46.129014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:42.206 [2024-10-07 05:25:46.129164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:42.206 [2024-10-07 05:25:46.129309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:42.206 passed 00:06:42.206 Test: blob_persist_test ...passed 00:06:42.466 Test: blob_decouple_snapshot ...passed 00:06:42.466 Test: blob_seek_io_unit ...passed 00:06:42.466 Test: blob_nested_freezes ...passed 00:06:42.466 Suite: blob_blob_copy_extent 00:06:42.466 Test: blob_write ...passed 00:06:42.466 Test: blob_read ...passed 00:06:42.466 Test: blob_rw_verify ...passed 00:06:42.466 Test: blob_rw_verify_iov_nomem ...passed 00:06:42.466 Test: blob_rw_iov_read_only ...passed 00:06:42.725 Test: blob_xattr ...passed 00:06:42.725 Test: blob_dirty_shutdown ...passed 00:06:42.725 Test: blob_is_degraded ...passed 00:06:42.725 Suite: blob_esnap_bs_copy_extent 00:06:42.725 Test: blob_esnap_create ...passed 00:06:42.725 Test: blob_esnap_thread_add_remove ...passed 00:06:42.725 Test: blob_esnap_clone_snapshot ...passed 00:06:42.725 Test: blob_esnap_clone_inflate ...passed 00:06:42.725 Test: blob_esnap_clone_decouple ...passed 00:06:42.984 Test: blob_esnap_clone_reload ...passed 00:06:42.984 Test: blob_esnap_hotplug ...passed 00:06:42.984 00:06:42.984 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.984 suites 16 16 n/a 0 0 00:06:42.984 tests 348 348 348 0 0 00:06:42.984 asserts 92605 92605 92605 0 n/a 00:06:42.984 00:06:42.984 Elapsed time = 13.313 seconds 00:06:42.984 05:25:46 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:42.984 00:06:42.984 00:06:42.984 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.984 http://cunit.sourceforge.net/ 00:06:42.984 00:06:42.984 00:06:42.984 Suite: blob_bdev 00:06:42.984 Test: create_bs_dev ...passed 00:06:42.984 Test: create_bs_dev_ro ...[2024-10-07 05:25:46.836523] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:42.984 passed 00:06:42.984 Test: create_bs_dev_rw ...passed 00:06:42.984 Test: claim_bs_dev ...[2024-10-07 05:25:46.837603] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:42.984 passed 00:06:42.984 Test: claim_bs_dev_ro ...passed 00:06:42.984 Test: deferred_destroy_refs ...passed 00:06:42.984 Test: deferred_destroy_channels ...passed 00:06:42.984 Test: deferred_destroy_threads ...passed 00:06:42.984 00:06:42.984 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.984 suites 1 1 n/a 0 0 00:06:42.984 tests 8 8 8 0 0 00:06:42.984 
asserts 119 119 119 0 n/a 00:06:42.984 00:06:42.984 Elapsed time = 0.001 seconds 00:06:42.984 05:25:46 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:42.984 00:06:42.984 00:06:42.984 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.984 http://cunit.sourceforge.net/ 00:06:42.984 00:06:42.984 00:06:42.984 Suite: tree 00:06:42.984 Test: blobfs_tree_op_test ...passed 00:06:42.984 00:06:42.984 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.984 suites 1 1 n/a 0 0 00:06:42.984 tests 1 1 1 0 0 00:06:42.984 asserts 27 27 27 0 n/a 00:06:42.984 00:06:42.984 Elapsed time = 0.000 seconds 00:06:42.984 05:25:46 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:42.984 00:06:42.984 00:06:42.984 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.984 http://cunit.sourceforge.net/ 00:06:42.984 00:06:42.984 00:06:42.984 Suite: blobfs_async_ut 00:06:43.243 Test: fs_init ...passed 00:06:43.243 Test: fs_open ...passed 00:06:43.243 Test: fs_create ...passed 00:06:43.243 Test: fs_truncate ...passed 00:06:43.243 Test: fs_rename ...[2024-10-07 05:25:47.045524] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:43.243 passed 00:06:43.243 Test: fs_rw_async ...passed 00:06:43.243 Test: fs_writev_readv_async ...passed 00:06:43.243 Test: tree_find_buffer_ut ...passed 00:06:43.243 Test: channel_ops ...passed 00:06:43.243 Test: channel_ops_sync ...passed 00:06:43.243 00:06:43.243 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.243 suites 1 1 n/a 0 0 00:06:43.243 tests 10 10 10 0 0 00:06:43.243 asserts 292 292 292 0 n/a 00:06:43.243 00:06:43.243 Elapsed time = 0.182 seconds 00:06:43.243 05:25:47 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:43.243 00:06:43.243 00:06:43.243 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.243 http://cunit.sourceforge.net/ 00:06:43.243 00:06:43.243 00:06:43.243 Suite: blobfs_sync_ut 00:06:43.502 Test: cache_read_after_write ...[2024-10-07 05:25:47.235495] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:43.502 passed 00:06:43.502 Test: file_length ...passed 00:06:43.502 Test: append_write_to_extend_blob ...passed 00:06:43.502 Test: partial_buffer ...passed 00:06:43.502 Test: cache_write_null_buffer ...passed 00:06:43.502 Test: fs_create_sync ...passed 00:06:43.502 Test: fs_rename_sync ...passed 00:06:43.502 Test: cache_append_no_cache ...passed 00:06:43.502 Test: fs_delete_file_without_close ...passed 00:06:43.503 00:06:43.503 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.503 suites 1 1 n/a 0 0 00:06:43.503 tests 9 9 9 0 0 00:06:43.503 asserts 345 345 345 0 n/a 00:06:43.503 00:06:43.503 Elapsed time = 0.376 seconds 00:06:43.503 05:25:47 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:43.503 00:06:43.503 00:06:43.503 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.503 http://cunit.sourceforge.net/ 00:06:43.503 00:06:43.503 00:06:43.503 Suite: blobfs_bdev_ut 00:06:43.503 Test: spdk_blobfs_bdev_detect_test ...[2024-10-07 05:25:47.425176] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
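The blobfs errors above ("Cannot find the file=file1 to deleted", "Cannot find the file=testfile to deleted") are spdk_fs_delete_file_async() reporting a lookup miss to its completion callback. A rough sketch of that pattern follows, assuming the public blobfs API in spdk/blobfs.h; creating and mounting the filesystem is not shown, and the file name is only illustrative.

    #include <stdio.h>
    #include "spdk/blobfs.h"

    /* fserrno is negative (e.g. -ENOENT) when the named file does not exist. */
    static void
    delete_file_done(void *ctx, int fserrno)
    {
            if (fserrno != 0) {
                    fprintf(stderr, "delete of %s failed: %d\n", (char *)ctx, fserrno);
            }
    }

    static void
    delete_missing_file(struct spdk_filesystem *fs)
    {
            /* "file1" mirrors the name used by the unit test above; any absent
             * name takes the same error path. */
            spdk_fs_delete_file_async(fs, "file1", delete_file_done, (void *)"file1");
    }
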
00:06:43.503 passed 00:06:43.503 Test: spdk_blobfs_bdev_create_test ...[2024-10-07 05:25:47.426258] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:43.503 passed 00:06:43.503 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:43.503 00:06:43.503 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.503 suites 1 1 n/a 0 0 00:06:43.503 tests 3 3 3 0 0 00:06:43.503 asserts 9 9 9 0 n/a 00:06:43.503 00:06:43.503 Elapsed time = 0.001 seconds 00:06:43.503 00:06:43.503 real 0m14.191s 00:06:43.503 user 0m13.539s 00:06:43.503 sys 0m0.704s 00:06:43.503 05:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.503 05:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.503 ************************************ 00:06:43.503 END TEST unittest_blob_blobfs 00:06:43.503 ************************************ 00:06:43.762 05:25:47 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:43.762 05:25:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.762 05:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.763 05:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.763 ************************************ 00:06:43.763 START TEST unittest_event 00:06:43.763 ************************************ 00:06:43.763 05:25:47 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:43.763 05:25:47 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:43.763 00:06:43.763 00:06:43.763 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.763 http://cunit.sourceforge.net/ 00:06:43.763 00:06:43.763 00:06:43.763 Suite: app_suite 00:06:43.763 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:43.763 options:app_ut: invalid option -- 'z' 00:06:43.763 00:06:43.763 -c, --config JSON config file (default none) 00:06:43.763 --json JSON config file (default none) 00:06:43.763 --json-ignore-init-errors 00:06:43.763 don't exit on invalid config entry 00:06:43.763 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:43.763 -g, --single-file-segments 00:06:43.763 force creating just one hugetlbfs file 00:06:43.763 -h, --help show this usage 00:06:43.763 -i, --shm-id shared memory ID (optional) 00:06:43.763 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:43.763 --lcores lcore to CPU mapping list. The list is in the format: 00:06:43.763 [<,lcores[@CPUs]>...] 00:06:43.763 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:43.763 Within the group, '-' is used for range separator, 00:06:43.763 ',' is used for single number separator. 00:06:43.763 '( )' can be omitted for single element group, 00:06:43.763 '@' can be omitted if cpus and lcores have the same value 00:06:43.763 -n, --mem-channels channel number of memory channels used for DPDK 00:06:43.763 -p, --main-core main (primary) core for DPDK 00:06:43.763 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:43.763 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:43.763 --disable-cpumask-locks Disable CPU core lock files. 
00:06:43.763 --silence-noticelog disable notice level logging to stderr 00:06:43.763 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:43.763 -u, --no-pci disable PCI access 00:06:43.763 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:43.763 --max-delay maximum reactor delay (in microseconds) 00:06:43.763 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:43.763 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:43.763 -R, --huge-unlink unlink huge files after initialization 00:06:43.763 -v, --version print SPDK version 00:06:43.763 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:43.763 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:43.763 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:43.763 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:43.763 Tracepoints vary in size and can use more than one trace entry. 00:06:43.763 --rpcs-allowed comma-separated list of permitted RPCS 00:06:43.763 --env-context Opaque context for use of the env implementation 00:06:43.763 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:43.763 --no-huge run without using hugepages 00:06:43.763 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:43.763 -e, --tpoint-group [:] 00:06:43.763 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:43.763 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:43.763 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:43.763 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:43.763 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:43.763 app_ut: unrecognized option '--test-long-opt' 00:06:43.763 app_ut [options] 00:06:43.763 options: 00:06:43.763 -c, --config JSON config file (default none) 00:06:43.763 --json JSON config file (default none) 00:06:43.763 --json-ignore-init-errors 00:06:43.763 don't exit on invalid config entry 00:06:43.763 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:43.763 -g, --single-file-segments 00:06:43.763 force creating just one hugetlbfs file 00:06:43.763 -h, --help show this usage 00:06:43.763 -i, --shm-id shared memory ID (optional) 00:06:43.763 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:43.763 --lcores lcore to CPU mapping list. The list is in the format: 00:06:43.763 [<,lcores[@CPUs]>...] 00:06:43.763 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:43.763 Within the group, '-' is used for range separator, 00:06:43.763 ',' is used for single number separator. 
00:06:43.763 '( )' can be omitted for single element group, 00:06:43.763 '@' can be omitted if cpus and lcores have the same value 00:06:43.763 -n, --mem-channels channel number of memory channels used for DPDK 00:06:43.763 -p, --main-core main (primary) core for DPDK 00:06:43.763 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:43.763 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:43.763 --disable-cpumask-locks Disable CPU core lock files. 00:06:43.763 --silence-noticelog disable notice level logging to stderr 00:06:43.763 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:43.763 -u, --no-pci disable PCI access 00:06:43.763 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:43.763 --max-delay maximum reactor delay (in microseconds) 00:06:43.763 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:43.763 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:43.763 -R, --huge-unlink unlink huge files after initialization 00:06:43.763 -v, --version print SPDK version 00:06:43.763 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:43.763 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:43.763 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:43.763 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:43.763 Tracepoints vary in size and can use more than one trace entry. 00:06:43.763 --rpcs-allowed comma-separated list of permitted RPCS 00:06:43.763 --env-context Opaque context for use of the env implementation 00:06:43.763 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:43.763 --no-huge run without using hugepages 00:06:43.763 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:43.763 -e, --tpoint-group [:] 00:06:43.763 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:43.763 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:43.763 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:43.763 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:43.763 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:43.763 [2024-10-07 05:25:47.518536] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:06:43.763 [2024-10-07 05:25:47.519386] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:43.763 app_ut [options] 00:06:43.763 options: 00:06:43.763 -c, --config JSON config file (default none) 00:06:43.763 --json JSON config file (default none) 00:06:43.763 --json-ignore-init-errors 00:06:43.763 don't exit on invalid config entry 00:06:43.763 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:43.763 -g, --single-file-segments 00:06:43.763 force creating just one hugetlbfs file 00:06:43.763 -h, --help show this usage 00:06:43.763 -i, --shm-id shared memory ID (optional) 00:06:43.763 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:43.763 --lcores lcore to CPU mapping list. The list is in the format: 00:06:43.763 [<,lcores[@CPUs]>...] 00:06:43.763 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:43.763 Within the group, '-' is used for range separator, 00:06:43.763 ',' is used for single number separator. 00:06:43.763 '( )' can be omitted for single element group, 00:06:43.763 '@' can be omitted if cpus and lcores have the same value 00:06:43.763 -n, --mem-channels channel number of memory channels used for DPDK 00:06:43.763 -p, --main-core main (primary) core for DPDK 00:06:43.763 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:43.763 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:43.763 --disable-cpumask-locks Disable CPU core lock files. 00:06:43.763 --silence-noticelog disable notice level logging to stderr 00:06:43.763 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:43.763 -u, --no-pci disable PCI access 00:06:43.763 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:43.763 --max-delay maximum reactor delay (in microseconds) 00:06:43.763 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:43.763 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:43.764 -R, --huge-unlink unlink huge files after initialization 00:06:43.764 -v, --version print SPDK version 00:06:43.764 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:43.764 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:43.764 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:43.764 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:43.764 Tracepoints vary in size and can use more than one trace entry. 00:06:43.764 --rpcs-allowed comma-separated list of permitted RPCS 00:06:43.764 --env-context Opaque context for use of the env implementation 00:06:43.764 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:43.764 --no-huge run without using hugepages 00:06:43.764 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:43.764 -e, --tpoint-group [:] 00:06:43.764 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:43.764 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:43.764 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:43.764 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:43.764 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:43.764 passed 00:06:43.764 00:06:43.764 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.764 suites 1 1 n/a 0 0 00:06:43.764 tests 1 1 1 0 0 00:06:43.764 asserts 8 8 8 0 n/a 00:06:43.764 00:06:43.764 Elapsed time = 0.003 seconds 00:06:43.764 [2024-10-07 05:25:47.522014] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:43.764 05:25:47 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:43.764 00:06:43.764 00:06:43.764 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.764 http://cunit.sourceforge.net/ 00:06:43.764 00:06:43.764 00:06:43.764 Suite: app_suite 00:06:43.764 Test: test_create_reactor ...passed 00:06:43.764 Test: test_init_reactors ...passed 00:06:43.764 Test: test_event_call ...passed 00:06:43.764 Test: test_schedule_thread ...passed 00:06:43.764 Test: test_reschedule_thread ...passed 00:06:43.764 Test: test_bind_thread ...passed 00:06:43.764 Test: test_for_each_reactor ...passed 00:06:43.764 Test: test_reactor_stats ...passed 00:06:43.764 Test: test_scheduler ...passed 00:06:43.764 Test: test_governor ...passed 00:06:43.764 00:06:43.764 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.764 suites 1 1 n/a 0 0 00:06:43.764 tests 10 10 10 0 0 00:06:43.764 asserts 344 344 344 0 n/a 00:06:43.764 00:06:43.764 Elapsed time = 0.015 seconds 00:06:43.764 00:06:43.764 real 0m0.101s 00:06:43.764 user 0m0.043s 00:06:43.764 sys 0m0.048s 00:06:43.764 05:25:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.764 05:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.764 ************************************ 00:06:43.764 END TEST unittest_event 00:06:43.764 ************************************ 00:06:43.764 05:25:47 -- unit/unittest.sh@233 -- # uname -s 00:06:43.764 05:25:47 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:43.764 05:25:47 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:43.764 05:25:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.764 05:25:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.764 05:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.764 ************************************ 00:06:43.764 START TEST unittest_ftl 00:06:43.764 ************************************ 00:06:43.764 05:25:47 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:43.764 05:25:47 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:43.764 00:06:43.764 00:06:43.764 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.764 http://cunit.sourceforge.net/ 00:06:43.764 00:06:43.764 00:06:43.764 Suite: ftl_band_suite 00:06:43.764 Test: test_band_block_offset_from_addr_base ...passed 00:06:43.764 Test: test_band_block_offset_from_addr_offset ...passed 00:06:44.023 Test: test_band_addr_from_block_offset ...passed 00:06:44.023 Test: test_band_set_addr ...passed 00:06:44.023 Test: test_invalidate_addr ...passed 00:06:44.023 Test: test_next_xfer_addr ...passed 00:06:44.023 00:06:44.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.023 suites 1 1 n/a 0 0 00:06:44.023 tests 6 6 6 0 0 00:06:44.023 asserts 30356 30356 30356 0 n/a 00:06:44.023 
00:06:44.023 Elapsed time = 0.178 seconds 00:06:44.023 05:25:47 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:44.023 00:06:44.023 00:06:44.023 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.023 http://cunit.sourceforge.net/ 00:06:44.023 00:06:44.023 00:06:44.023 Suite: ftl_bitmap 00:06:44.023 Test: test_ftl_bitmap_create ...[2024-10-07 05:25:47.920109] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:44.023 [2024-10-07 05:25:47.920433] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:44.023 passed 00:06:44.023 Test: test_ftl_bitmap_get ...passed 00:06:44.023 Test: test_ftl_bitmap_set ...passed 00:06:44.023 Test: test_ftl_bitmap_clear ...passed 00:06:44.023 Test: test_ftl_bitmap_find_first_set ...passed 00:06:44.023 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:44.023 Test: test_ftl_bitmap_count_set ...passed 00:06:44.023 00:06:44.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.023 suites 1 1 n/a 0 0 00:06:44.023 tests 7 7 7 0 0 00:06:44.023 asserts 137 137 137 0 n/a 00:06:44.023 00:06:44.023 Elapsed time = 0.001 seconds 00:06:44.023 05:25:47 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:44.023 00:06:44.023 00:06:44.023 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.023 http://cunit.sourceforge.net/ 00:06:44.023 00:06:44.023 00:06:44.023 Suite: ftl_io_suite 00:06:44.023 Test: test_completion ...passed 00:06:44.023 Test: test_multiple_ios ...passed 00:06:44.023 00:06:44.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.023 suites 1 1 n/a 0 0 00:06:44.023 tests 2 2 2 0 0 00:06:44.023 asserts 47 47 47 0 n/a 00:06:44.023 00:06:44.023 Elapsed time = 0.003 seconds 00:06:44.023 05:25:47 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:44.023 00:06:44.023 00:06:44.023 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.023 http://cunit.sourceforge.net/ 00:06:44.023 00:06:44.023 00:06:44.023 Suite: ftl_mngt 00:06:44.023 Test: test_next_step ...passed 00:06:44.023 Test: test_continue_step ...passed 00:06:44.023 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:44.023 Test: test_fail_step ...passed 00:06:44.023 Test: test_mngt_call_and_call_rollback ...passed 00:06:44.023 Test: test_nested_process_failure ...passed 00:06:44.023 00:06:44.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.023 suites 1 1 n/a 0 0 00:06:44.023 tests 6 6 6 0 0 00:06:44.023 asserts 176 176 176 0 n/a 00:06:44.023 00:06:44.023 Elapsed time = 0.001 seconds 00:06:44.283 05:25:47 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:44.283 00:06:44.283 00:06:44.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.283 http://cunit.sourceforge.net/ 00:06:44.283 00:06:44.283 00:06:44.283 Suite: ftl_mempool 00:06:44.283 Test: test_ftl_mempool_create ...passed 00:06:44.283 Test: test_ftl_mempool_get_put ...passed 00:06:44.283 00:06:44.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.283 suites 1 1 n/a 0 0 00:06:44.283 tests 2 2 2 0 0 00:06:44.283 asserts 36 36 36 0 n/a 00:06:44.283 00:06:44.283 Elapsed time = 0.000 seconds 00:06:44.283 05:25:48 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:44.283 00:06:44.283 00:06:44.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.283 http://cunit.sourceforge.net/ 00:06:44.283 00:06:44.283 00:06:44.283 Suite: ftl_addr64_suite 00:06:44.283 Test: test_addr_cached ...passed 00:06:44.283 00:06:44.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.283 suites 1 1 n/a 0 0 00:06:44.283 tests 1 1 1 0 0 00:06:44.283 asserts 1536 1536 1536 0 n/a 00:06:44.283 00:06:44.283 Elapsed time = 0.001 seconds 00:06:44.283 05:25:48 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:44.283 00:06:44.283 00:06:44.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.283 http://cunit.sourceforge.net/ 00:06:44.283 00:06:44.283 00:06:44.283 Suite: ftl_sb 00:06:44.283 Test: test_sb_crc_v2 ...passed 00:06:44.283 Test: test_sb_crc_v3 ...passed 00:06:44.283 Test: test_sb_v3_md_layout ...[2024-10-07 05:25:48.072197] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:44.283 [2024-10-07 05:25:48.072526] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:44.283 [2024-10-07 05:25:48.072568] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:44.283 [2024-10-07 05:25:48.072605] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:44.283 [2024-10-07 05:25:48.072636] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:44.283 [2024-10-07 05:25:48.072712] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:44.283 [2024-10-07 05:25:48.072760] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:44.283 [2024-10-07 05:25:48.072805] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:44.283 [2024-10-07 05:25:48.072880] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:44.283 [2024-10-07 05:25:48.072924] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:44.283 [2024-10-07 05:25:48.072970] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:44.283 passed 00:06:44.283 Test: test_sb_v5_md_layout ...passed 00:06:44.283 00:06:44.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.283 suites 1 1 n/a 0 0 00:06:44.283 tests 4 4 4 0 0 00:06:44.283 asserts 148 148 148 0 n/a 00:06:44.283 00:06:44.283 Elapsed time = 0.002 seconds 00:06:44.283 05:25:48 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:44.283 00:06:44.283 00:06:44.283 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:44.283 http://cunit.sourceforge.net/ 00:06:44.283 00:06:44.283 00:06:44.283 Suite: ftl_layout_upgrade 00:06:44.283 Test: test_l2p_upgrade ...passed 00:06:44.283 00:06:44.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.283 suites 1 1 n/a 0 0 00:06:44.283 tests 1 1 1 0 0 00:06:44.283 asserts 140 140 140 0 n/a 00:06:44.283 00:06:44.283 Elapsed time = 0.001 seconds 00:06:44.283 00:06:44.283 real 0m0.488s 00:06:44.283 user 0m0.247s 00:06:44.283 sys 0m0.223s 00:06:44.283 ************************************ 00:06:44.283 END TEST unittest_ftl 00:06:44.283 ************************************ 00:06:44.283 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.283 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.283 05:25:48 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:44.283 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.283 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.283 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.283 ************************************ 00:06:44.283 START TEST unittest_accel 00:06:44.283 ************************************ 00:06:44.283 05:25:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:44.283 00:06:44.283 00:06:44.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.283 http://cunit.sourceforge.net/ 00:06:44.283 00:06:44.283 00:06:44.283 Suite: accel_sequence 00:06:44.283 Test: test_sequence_fill_copy ...passed 00:06:44.283 Test: test_sequence_abort ...passed 00:06:44.283 Test: test_sequence_append_error ...passed 00:06:44.283 Test: test_sequence_completion_error ...[2024-10-07 05:25:48.214088] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4c7f59c7c0 00:06:44.283 [2024-10-07 05:25:48.214461] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f4c7f59c7c0 00:06:44.283 [2024-10-07 05:25:48.214596] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f4c7f59c7c0 00:06:44.283 [2024-10-07 05:25:48.214675] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f4c7f59c7c0 00:06:44.283 passed 00:06:44.283 Test: test_sequence_decompress ...passed 00:06:44.283 Test: test_sequence_reverse ...passed 00:06:44.283 Test: test_sequence_copy_elision ...passed 00:06:44.283 Test: test_sequence_accel_buffers ...passed 00:06:44.283 Test: test_sequence_memory_domain ...[2024-10-07 05:25:48.227233] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:44.283 passed 00:06:44.283 Test: test_sequence_module_memory_domain ...[2024-10-07 05:25:48.227462] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:44.283 passed 00:06:44.283 Test: test_sequence_crypto ...passed 00:06:44.283 Test: test_sequence_driver ...[2024-10-07 05:25:48.234872] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f4c7e9747c0 using driver: ut 00:06:44.283 
[2024-10-07 05:25:48.235022] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f4c7e9747c0 through driver: ut 00:06:44.283 passed 00:06:44.283 Test: test_sequence_same_iovs ...passed 00:06:44.283 Test: test_sequence_crc32 ...passed 00:06:44.283 Suite: accel 00:06:44.283 Test: test_spdk_accel_task_complete ...passed 00:06:44.283 Test: test_get_task ...passed 00:06:44.283 Test: test_spdk_accel_submit_copy ...passed 00:06:44.283 Test: test_spdk_accel_submit_dualcast ...passed 00:06:44.283 Test: test_spdk_accel_submit_compare ...passed 00:06:44.283 Test: test_spdk_accel_submit_fill ...passed 00:06:44.283 Test: test_spdk_accel_submit_crc32c ...passed 00:06:44.283 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:44.283 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:44.283 Test: test_spdk_accel_submit_xor ...passed 00:06:44.283 Test: test_spdk_accel_module_find_by_name ...passed 00:06:44.283 Test: test_spdk_accel_module_register ...[2024-10-07 05:25:48.240503] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:44.283 [2024-10-07 05:25:48.240579] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:44.283 passed 00:06:44.283 00:06:44.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.283 suites 2 2 n/a 0 0 00:06:44.283 tests 26 26 26 0 0 00:06:44.283 asserts 831 831 831 0 n/a 00:06:44.283 00:06:44.283 Elapsed time = 0.038 seconds 00:06:44.598 00:06:44.598 real 0m0.081s 00:06:44.598 user 0m0.046s 00:06:44.598 sys 0m0.036s 00:06:44.598 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.598 ************************************ 00:06:44.598 END TEST unittest_accel 00:06:44.598 ************************************ 00:06:44.598 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 05:25:48 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:44.598 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.598 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.598 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 ************************************ 00:06:44.598 START TEST unittest_ioat 00:06:44.598 ************************************ 00:06:44.598 05:25:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:44.598 00:06:44.598 00:06:44.598 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.598 http://cunit.sourceforge.net/ 00:06:44.598 00:06:44.598 00:06:44.598 Suite: ioat 00:06:44.598 Test: ioat_state_check ...passed 00:06:44.598 00:06:44.598 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.598 suites 1 1 n/a 0 0 00:06:44.598 tests 1 1 1 0 0 00:06:44.598 asserts 32 32 32 0 n/a 00:06:44.598 00:06:44.598 Elapsed time = 0.000 seconds 00:06:44.598 00:06:44.598 real 0m0.029s 00:06:44.598 user 0m0.022s 00:06:44.598 sys 0m0.008s 00:06:44.598 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.598 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 ************************************ 00:06:44.598 END TEST unittest_ioat 00:06:44.598 ************************************ 00:06:44.598 05:25:48 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:44.598 05:25:48 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:44.598 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.598 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.598 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 ************************************ 00:06:44.598 START TEST unittest_idxd_user 00:06:44.598 ************************************ 00:06:44.598 05:25:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:44.598 00:06:44.598 00:06:44.598 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.598 http://cunit.sourceforge.net/ 00:06:44.598 00:06:44.598 00:06:44.598 Suite: idxd_user 00:06:44.598 Test: test_idxd_wait_cmd ...passed 00:06:44.598 Test: test_idxd_reset_dev ...[2024-10-07 05:25:48.415291] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:44.598 [2024-10-07 05:25:48.415531] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:44.598 [2024-10-07 05:25:48.415661] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:44.598 [2024-10-07 05:25:48.415704] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:44.598 passed 00:06:44.598 Test: test_idxd_group_config ...passed 00:06:44.598 Test: test_idxd_wq_config ...passed 00:06:44.598 00:06:44.598 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.598 suites 1 1 n/a 0 0 00:06:44.598 tests 4 4 4 0 0 00:06:44.598 asserts 20 20 20 0 n/a 00:06:44.598 00:06:44.598 Elapsed time = 0.001 seconds 00:06:44.598 00:06:44.598 real 0m0.030s 00:06:44.598 user 0m0.012s 00:06:44.598 sys 0m0.018s 00:06:44.598 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.598 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 ************************************ 00:06:44.598 END TEST unittest_idxd_user 00:06:44.598 ************************************ 00:06:44.599 05:25:48 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:44.599 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.599 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.599 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.599 ************************************ 00:06:44.599 START TEST unittest_iscsi 00:06:44.599 ************************************ 00:06:44.599 05:25:48 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:06:44.599 05:25:48 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:44.599 00:06:44.599 00:06:44.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.599 http://cunit.sourceforge.net/ 00:06:44.599 00:06:44.599 00:06:44.599 Suite: conn_suite 00:06:44.599 Test: read_task_split_in_order_case ...passed 00:06:44.599 Test: read_task_split_reverse_order_case ...passed 00:06:44.599 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:44.599 Test: process_non_read_task_completion_test ...passed 00:06:44.599 Test: free_tasks_on_connection ...passed 00:06:44.599 Test: free_tasks_with_queued_datain ...passed 00:06:44.599 Test: 
abort_queued_datain_task_test ...passed 00:06:44.599 Test: abort_queued_datain_tasks_test ...passed 00:06:44.599 00:06:44.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.599 suites 1 1 n/a 0 0 00:06:44.599 tests 8 8 8 0 0 00:06:44.599 asserts 230 230 230 0 n/a 00:06:44.599 00:06:44.599 Elapsed time = 0.000 seconds 00:06:44.599 05:25:48 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:44.599 00:06:44.599 00:06:44.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.599 http://cunit.sourceforge.net/ 00:06:44.599 00:06:44.599 00:06:44.599 Suite: iscsi_suite 00:06:44.599 Test: param_negotiation_test ...passed 00:06:44.599 Test: list_negotiation_test ...passed 00:06:44.599 Test: parse_valid_test ...passed 00:06:44.599 Test: parse_invalid_test ...[2024-10-07 05:25:48.534939] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:44.599 [2024-10-07 05:25:48.535250] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:44.599 [2024-10-07 05:25:48.535310] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:44.599 [2024-10-07 05:25:48.535388] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:44.599 [2024-10-07 05:25:48.535525] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:44.599 [2024-10-07 05:25:48.535586] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:44.599 [2024-10-07 05:25:48.535733] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:44.599 passed 00:06:44.599 00:06:44.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.599 suites 1 1 n/a 0 0 00:06:44.599 tests 4 4 4 0 0 00:06:44.599 asserts 161 161 161 0 n/a 00:06:44.599 00:06:44.599 Elapsed time = 0.005 seconds 00:06:44.599 05:25:48 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:44.599 00:06:44.599 00:06:44.599 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.599 http://cunit.sourceforge.net/ 00:06:44.599 00:06:44.599 00:06:44.599 Suite: iscsi_target_node_suite 00:06:44.599 Test: add_lun_test_cases ...passed 00:06:44.599 Test: allow_any_allowed ...passed 00:06:44.599 Test: allow_ipv6_allowed ...passed 00:06:44.599 Test: allow_ipv6_denied ...passed 00:06:44.599 Test: allow_ipv6_invalid ...passed 00:06:44.599 Test: allow_ipv4_allowed ...passed 00:06:44.599 Test: allow_ipv4_denied ...passed 00:06:44.599 Test: allow_ipv4_invalid ...passed 00:06:44.599 Test: node_access_allowed ...[2024-10-07 05:25:48.563822] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:44.599 [2024-10-07 05:25:48.564144] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:44.599 [2024-10-07 05:25:48.564256] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:44.599 [2024-10-07 05:25:48.564300] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:44.599 [2024-10-07 05:25:48.564343] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: 
*ERROR*: spdk_scsi_dev_add_lun failed 00:06:44.599 passed 00:06:44.599 Test: node_access_denied_by_empty_netmask ...passed 00:06:44.599 Test: node_access_multi_initiator_groups_cases ...passed 00:06:44.599 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:44.599 Test: chap_param_test_cases ...[2024-10-07 05:25:48.564742] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:44.599 [2024-10-07 05:25:48.564787] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:44.599 passed 00:06:44.599 00:06:44.599 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.599 suites 1 1 n/a 0 0 00:06:44.599 tests 13 13 13 0 0 00:06:44.599 asserts 50 50 50 0 n/a 00:06:44.599 00:06:44.599 Elapsed time = 0.001 seconds 00:06:44.599 [2024-10-07 05:25:48.564847] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:44.599 [2024-10-07 05:25:48.564882] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:44.599 [2024-10-07 05:25:48.564924] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:44.862 05:25:48 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:44.862 00:06:44.862 00:06:44.862 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.862 http://cunit.sourceforge.net/ 00:06:44.862 00:06:44.862 00:06:44.862 Suite: iscsi_suite 00:06:44.862 Test: op_login_check_target_test ...passed 00:06:44.862 Test: op_login_session_normal_test ...[2024-10-07 05:25:48.595266] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:44.862 [2024-10-07 05:25:48.595583] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:44.862 [2024-10-07 05:25:48.595657] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:44.862 [2024-10-07 05:25:48.595704] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:44.862 [2024-10-07 05:25:48.595763] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:44.862 passed 00:06:44.862 Test: maxburstlength_test ...[2024-10-07 05:25:48.595858] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:44.862 [2024-10-07 05:25:48.595969] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:44.862 [2024-10-07 05:25:48.596038] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:44.862 passed 00:06:44.862 Test: underflow_for_read_transfer_test ...passed 00:06:44.862 Test: underflow_for_zero_read_transfer_test ...[2024-10-07 05:25:48.596319] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:44.862 
[2024-10-07 05:25:48.596380] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:06:44.862 passed 00:06:44.862 Test: underflow_for_request_sense_test ...passed 00:06:44.862 Test: underflow_for_check_condition_test ...passed 00:06:44.862 Test: add_transfer_task_test ...passed 00:06:44.862 Test: get_transfer_task_test ...passed 00:06:44.862 Test: del_transfer_task_test ...passed 00:06:44.862 Test: clear_all_transfer_tasks_test ...passed 00:06:44.862 Test: build_iovs_test ...passed 00:06:44.862 Test: build_iovs_with_md_test ...passed 00:06:44.862 Test: pdu_hdr_op_login_test ...[2024-10-07 05:25:48.597877] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:44.862 [2024-10-07 05:25:48.598023] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:44.862 passed 00:06:44.862 Test: pdu_hdr_op_text_test ...[2024-10-07 05:25:48.598121] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:44.862 [2024-10-07 05:25:48.598225] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:44.862 [2024-10-07 05:25:48.598336] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:44.862 passed 00:06:44.862 Test: pdu_hdr_op_logout_test ...passed 00:06:44.862 Test: pdu_hdr_op_scsi_test ...[2024-10-07 05:25:48.598383] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:44.862 [2024-10-07 05:25:48.598467] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
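The "Suite:", "Test: ... passed" and "Run Summary" blocks that recur throughout this output are produced by CUnit 2.1-3, which every *_ut binary links against, as the banner lines state. A generic harness of that shape looks roughly like the sketch below; the suite and test names are placeholders, not the actual SPDK test sources.

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
            CU_ASSERT(1 + 1 == 2);    /* each CU_ASSERT* feeds the "asserts" column */
    }

    int
    main(void)
    {
            CU_pSuite suite;
            unsigned int num_failures;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            suite = CU_add_suite("example_suite", NULL, NULL);    /* prints as "Suite: ..." */
            if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            CU_basic_set_mode(CU_BRM_VERBOSE);    /* verbose mode emits the per-test lines */
            CU_basic_run_tests();                 /* prints the "Run Summary" table */
            num_failures = CU_get_number_of_failures();
            CU_cleanup_registry();

            return num_failures;
    }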
00:06:44.862 [2024-10-07 05:25:48.598682] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:44.862 [2024-10-07 05:25:48.598725] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:44.862 [2024-10-07 05:25:48.598786] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:44.862 [2024-10-07 05:25:48.598902] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:44.863 [2024-10-07 05:25:48.599003] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:44.863 passed 00:06:44.863 Test: pdu_hdr_op_task_mgmt_test ...[2024-10-07 05:25:48.599179] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:44.863 [2024-10-07 05:25:48.599280] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:44.863 [2024-10-07 05:25:48.599388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:44.863 passed 00:06:44.863 Test: pdu_hdr_op_nopout_test ...[2024-10-07 05:25:48.599677] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:44.863 passed 00:06:44.863 Test: pdu_hdr_op_data_test ...[2024-10-07 05:25:48.599791] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:44.863 [2024-10-07 05:25:48.599838] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:44.863 [2024-10-07 05:25:48.599893] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:44.863 [2024-10-07 05:25:48.599945] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:44.863 [2024-10-07 05:25:48.600014] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:44.863 [2024-10-07 05:25:48.600095] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:44.863 [2024-10-07 05:25:48.600163] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:44.863 [2024-10-07 05:25:48.600240] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:44.863 [2024-10-07 05:25:48.600349] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:44.863 [2024-10-07 05:25:48.600394] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:44.863 passed 00:06:44.863 Test: empty_text_with_cbit_test ...passed 00:06:44.863 Test: pdu_payload_read_test ...[2024-10-07 05:25:48.602552] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:44.863 passed 00:06:44.863 Test: data_out_pdu_sequence_test ...passed 00:06:44.863 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:44.863 00:06:44.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.863 suites 1 1 n/a 0 0 00:06:44.863 tests 24 24 24 0 0 00:06:44.863 asserts 150253 150253 150253 0 n/a 00:06:44.863 00:06:44.863 Elapsed time = 0.017 seconds 00:06:44.863 05:25:48 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:44.863 00:06:44.863 00:06:44.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.863 http://cunit.sourceforge.net/ 00:06:44.863 00:06:44.863 00:06:44.863 Suite: init_grp_suite 00:06:44.863 Test: create_initiator_group_success_case ...passed 00:06:44.863 Test: find_initiator_group_success_case ...passed 00:06:44.863 Test: register_initiator_group_twice_case ...passed 00:06:44.863 Test: add_initiator_name_success_case ...passed 00:06:44.863 Test: add_initiator_name_fail_case ...passed 00:06:44.863 Test: delete_all_initiator_names_success_case ...[2024-10-07 05:25:48.643194] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:44.863 passed 00:06:44.863 Test: add_netmask_success_case ...passed 00:06:44.863 Test: add_netmask_fail_case ...[2024-10-07 05:25:48.643740] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:44.863 passed 00:06:44.863 Test: delete_all_netmasks_success_case ...passed 00:06:44.863 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:44.863 Test: netmask_overwrite_all_to_any_case ...passed 00:06:44.863 Test: add_delete_initiator_names_case ...passed 00:06:44.863 Test: add_duplicated_initiator_names_case ...passed 00:06:44.863 Test: delete_nonexisting_initiator_names_case ...passed 00:06:44.863 Test: add_delete_netmasks_case ...passed 00:06:44.863 Test: add_duplicated_netmasks_case ...passed 00:06:44.863 Test: delete_nonexisting_netmasks_case ...passed 00:06:44.863 00:06:44.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.863 suites 1 1 n/a 0 0 00:06:44.863 tests 17 17 17 0 0 00:06:44.863 asserts 108 108 108 0 n/a 00:06:44.863 00:06:44.863 Elapsed time = 0.001 seconds 00:06:44.863 05:25:48 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:44.863 00:06:44.863 00:06:44.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.863 http://cunit.sourceforge.net/ 00:06:44.863 00:06:44.863 00:06:44.863 Suite: portal_grp_suite 00:06:44.863 Test: portal_create_ipv4_normal_case ...passed 00:06:44.863 Test: portal_create_ipv6_normal_case ...passed 00:06:44.863 Test: portal_create_ipv4_wildcard_case ...passed 00:06:44.863 Test: portal_create_ipv6_wildcard_case ...passed 00:06:44.863 Test: portal_create_twice_case ...[2024-10-07 05:25:48.669866] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:44.863 passed 00:06:44.863 Test: portal_grp_register_unregister_case ...passed 00:06:44.863 Test: portal_grp_register_twice_case ...passed 00:06:44.863 Test: portal_grp_add_delete_case ...passed 00:06:44.863 Test: portal_grp_add_delete_twice_case ...passed 00:06:44.863 00:06:44.863 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:44.863 suites 1 1 n/a 0 0 00:06:44.863 tests 9 9 9 0 0 00:06:44.863 asserts 44 44 44 0 n/a 00:06:44.863 00:06:44.863 Elapsed time = 0.003 seconds 00:06:44.863 00:06:44.863 real 0m0.200s 00:06:44.863 user 0m0.141s 00:06:44.863 sys 0m0.059s 00:06:44.863 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.863 ************************************ 00:06:44.863 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.863 END TEST unittest_iscsi 00:06:44.863 ************************************ 00:06:44.863 05:25:48 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:44.863 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.863 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.863 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.863 ************************************ 00:06:44.863 START TEST unittest_json 00:06:44.863 ************************************ 00:06:44.863 05:25:48 -- common/autotest_common.sh@1104 -- # unittest_json 00:06:44.863 05:25:48 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:44.863 00:06:44.863 00:06:44.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.863 http://cunit.sourceforge.net/ 00:06:44.863 00:06:44.863 00:06:44.863 Suite: json 00:06:44.863 Test: test_parse_literal ...passed 00:06:44.863 Test: test_parse_string_simple ...passed 00:06:44.863 Test: test_parse_string_control_chars ...passed 00:06:44.863 Test: test_parse_string_utf8 ...passed 00:06:44.863 Test: test_parse_string_escapes_twochar ...passed 00:06:44.863 Test: test_parse_string_escapes_unicode ...passed 00:06:44.863 Test: test_parse_number ...passed 00:06:44.863 Test: test_parse_array ...passed 00:06:44.863 Test: test_parse_object ...passed 00:06:44.863 Test: test_parse_nesting ...passed 00:06:44.863 Test: test_parse_comment ...passed 00:06:44.863 00:06:44.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.863 suites 1 1 n/a 0 0 00:06:44.863 tests 11 11 11 0 0 00:06:44.863 asserts 1516 1516 1516 0 n/a 00:06:44.863 00:06:44.863 Elapsed time = 0.001 seconds 00:06:44.863 05:25:48 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:44.863 00:06:44.863 00:06:44.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.863 http://cunit.sourceforge.net/ 00:06:44.863 00:06:44.863 00:06:44.863 Suite: json 00:06:44.863 Test: test_strequal ...passed 00:06:44.863 Test: test_num_to_uint16 ...passed 00:06:44.863 Test: test_num_to_int32 ...passed 00:06:44.863 Test: test_num_to_uint64 ...passed 00:06:44.863 Test: test_decode_object ...passed 00:06:44.863 Test: test_decode_array ...passed 00:06:44.863 Test: test_decode_bool ...passed 00:06:44.863 Test: test_decode_uint16 ...passed 00:06:44.863 Test: test_decode_int32 ...passed 00:06:44.863 Test: test_decode_uint32 ...passed 00:06:44.863 Test: test_decode_uint64 ...passed 00:06:44.863 Test: test_decode_string ...passed 00:06:44.863 Test: test_decode_uuid ...passed 00:06:44.863 Test: test_find ...passed 00:06:44.863 Test: test_find_array ...passed 00:06:44.863 Test: test_iterating ...passed 00:06:44.863 Test: test_free_object ...passed 00:06:44.863 00:06:44.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.863 suites 1 1 n/a 0 0 00:06:44.863 tests 17 17 17 0 0 00:06:44.863 asserts 236 236 236 0 n/a 00:06:44.863 00:06:44.863 Elapsed time = 0.001 seconds 00:06:44.863 05:25:48 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:44.863 00:06:44.863 00:06:44.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.863 http://cunit.sourceforge.net/ 00:06:44.863 00:06:44.863 00:06:44.863 Suite: json 00:06:44.863 Test: test_write_literal ...passed 00:06:44.863 Test: test_write_string_simple ...passed 00:06:44.863 Test: test_write_string_escapes ...passed 00:06:44.863 Test: test_write_string_utf16le ...passed 00:06:44.863 Test: test_write_number_int32 ...passed 00:06:44.863 Test: test_write_number_uint32 ...passed 00:06:44.863 Test: test_write_number_uint128 ...passed 00:06:44.863 Test: test_write_string_number_uint128 ...passed 00:06:44.863 Test: test_write_number_int64 ...passed 00:06:44.863 Test: test_write_number_uint64 ...passed 00:06:44.864 Test: test_write_number_double ...passed 00:06:44.864 Test: test_write_uuid ...passed 00:06:44.864 Test: test_write_array ...passed 00:06:44.864 Test: test_write_object ...passed 00:06:44.864 Test: test_write_nesting ...passed 00:06:44.864 Test: test_write_val ...passed 00:06:44.864 00:06:44.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.864 suites 1 1 n/a 0 0 00:06:44.864 tests 16 16 16 0 0 00:06:44.864 asserts 918 918 918 0 n/a 00:06:44.864 00:06:44.864 Elapsed time = 0.004 seconds 00:06:44.864 05:25:48 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:44.864 00:06:44.864 00:06:44.864 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.864 http://cunit.sourceforge.net/ 00:06:44.864 00:06:44.864 00:06:44.864 Suite: jsonrpc 00:06:44.864 Test: test_parse_request ...passed 00:06:44.864 Test: test_parse_request_streaming ...passed 00:06:44.864 00:06:44.864 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.864 suites 1 1 n/a 0 0 00:06:44.864 tests 2 2 2 0 0 00:06:44.864 asserts 289 289 289 0 n/a 00:06:44.864 00:06:44.864 Elapsed time = 0.004 seconds 00:06:45.123 00:06:45.123 real 0m0.110s 00:06:45.123 user 0m0.078s 00:06:45.123 sys 0m0.034s 00:06:45.123 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.123 ************************************ 00:06:45.123 END TEST unittest_json 00:06:45.123 ************************************ 00:06:45.123 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 05:25:48 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:45.123 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.123 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.123 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 ************************************ 00:06:45.123 START TEST unittest_rpc 00:06:45.123 ************************************ 00:06:45.123 05:25:48 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:45.123 05:25:48 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:45.123 00:06:45.123 00:06:45.123 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.123 http://cunit.sourceforge.net/ 00:06:45.123 00:06:45.123 00:06:45.123 Suite: rpc 00:06:45.123 Test: test_jsonrpc_handler ...passed 00:06:45.123 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:45.123 Test: test_rpc_get_methods ...[2024-10-07 05:25:48.896890] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:45.123 passed 00:06:45.123 Test: test_rpc_spdk_get_version 
...passed 00:06:45.123 Test: test_spdk_rpc_listen_close ...passed 00:06:45.123 00:06:45.123 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.123 suites 1 1 n/a 0 0 00:06:45.123 tests 5 5 5 0 0 00:06:45.123 asserts 20 20 20 0 n/a 00:06:45.123 00:06:45.123 Elapsed time = 0.000 seconds 00:06:45.123 00:06:45.123 real 0m0.028s 00:06:45.123 user 0m0.017s 00:06:45.123 sys 0m0.012s 00:06:45.123 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.123 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 ************************************ 00:06:45.123 END TEST unittest_rpc 00:06:45.123 ************************************ 00:06:45.123 05:25:48 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:45.123 05:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.123 05:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.123 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 ************************************ 00:06:45.123 START TEST unittest_notify 00:06:45.123 ************************************ 00:06:45.123 05:25:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:45.123 00:06:45.123 00:06:45.123 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.123 http://cunit.sourceforge.net/ 00:06:45.123 00:06:45.123 00:06:45.123 Suite: app_suite 00:06:45.123 Test: notify ...passed 00:06:45.123 00:06:45.123 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.123 suites 1 1 n/a 0 0 00:06:45.123 tests 1 1 1 0 0 00:06:45.123 asserts 13 13 13 0 n/a 00:06:45.123 00:06:45.123 Elapsed time = 0.000 seconds 00:06:45.123 00:06:45.123 real 0m0.029s 00:06:45.123 user 0m0.024s 00:06:45.123 sys 0m0.006s 00:06:45.123 05:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.123 05:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 ************************************ 00:06:45.123 END TEST unittest_notify 00:06:45.123 ************************************ 00:06:45.123 05:25:49 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:45.123 05:25:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.123 05:25:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.123 05:25:49 -- common/autotest_common.sh@10 -- # set +x 00:06:45.123 ************************************ 00:06:45.123 START TEST unittest_nvme 00:06:45.123 ************************************ 00:06:45.123 05:25:49 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:45.123 05:25:49 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:45.123 00:06:45.123 00:06:45.123 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.123 http://cunit.sourceforge.net/ 00:06:45.123 00:06:45.123 00:06:45.123 Suite: nvme 00:06:45.123 Test: test_opc_data_transfer ...passed 00:06:45.123 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:45.123 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:45.123 Test: test_trid_parse_and_compare ...[2024-10-07 05:25:49.056305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:45.123 [2024-10-07 05:25:49.056567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:45.123 passed 00:06:45.123 Test: test_trid_trtype_str 
...passed 00:06:45.123 Test: test_trid_adrfam_str ...passed 00:06:45.123 Test: test_nvme_ctrlr_probe ...[2024-10-07 05:25:49.056658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:45.123 [2024-10-07 05:25:49.056700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:45.123 [2024-10-07 05:25:49.056735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:45.123 [2024-10-07 05:25:49.056813] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:45.123 passed 00:06:45.123 Test: test_spdk_nvme_probe ...[2024-10-07 05:25:49.057019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:45.123 [2024-10-07 05:25:49.057117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:45.123 [2024-10-07 05:25:49.057153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:45.123 passed 00:06:45.123 Test: test_spdk_nvme_connect ...[2024-10-07 05:25:49.057244] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:45.123 [2024-10-07 05:25:49.057286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:45.123 [2024-10-07 05:25:49.057367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:45.123 [2024-10-07 05:25:49.057682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:45.123 [2024-10-07 05:25:49.057740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:45.123 passed 00:06:45.123 Test: test_nvme_ctrlr_probe_internal ...passed 00:06:45.123 Test: test_nvme_init_controllers ...[2024-10-07 05:25:49.057852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:45.123 [2024-10-07 05:25:49.057896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:45.123 [2024-10-07 05:25:49.057974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:45.123 passed 00:06:45.124 Test: test_nvme_driver_init ...[2024-10-07 05:25:49.058068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:45.124 [2024-10-07 05:25:49.058107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:45.383 [2024-10-07 05:25:49.172593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:45.383 [2024-10-07 05:25:49.172742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:45.383 passed 00:06:45.383 Test: test_spdk_nvme_detach ...passed 00:06:45.383 Test: test_nvme_completion_poll_cb ...passed 00:06:45.383 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:45.383 Test: test_nvme_allocate_request_null ...passed 00:06:45.383 Test: 
test_nvme_allocate_request ...passed 00:06:45.383 Test: test_nvme_free_request ...passed 00:06:45.383 Test: test_nvme_allocate_request_user_copy ...passed 00:06:45.383 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:45.383 Test: test_nvme_request_check_timeout ...passed 00:06:45.383 Test: test_nvme_wait_for_completion ...passed 00:06:45.383 Test: test_spdk_nvme_parse_func ...passed 00:06:45.383 Test: test_spdk_nvme_detach_async ...passed 00:06:45.383 Test: test_nvme_parse_addr ...[2024-10-07 05:25:49.173403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:45.383 passed 00:06:45.383 00:06:45.383 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.383 suites 1 1 n/a 0 0 00:06:45.383 tests 25 25 25 0 0 00:06:45.383 asserts 326 326 326 0 n/a 00:06:45.383 00:06:45.383 Elapsed time = 0.006 seconds 00:06:45.383 05:25:49 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:45.383 00:06:45.383 00:06:45.383 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.383 http://cunit.sourceforge.net/ 00:06:45.383 00:06:45.383 00:06:45.383 Suite: nvme_ctrlr 00:06:45.384 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-10-07 05:25:49.204693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-10-07 05:25:49.206554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-10-07 05:25:49.207900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-10-07 05:25:49.209209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-10-07 05:25:49.210629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 [2024-10-07 05:25:49.211856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-07 05:25:49.213175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-07 05:25:49.214600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-10-07 05:25:49.217439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 [2024-10-07 05:25:49.219864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-07 05:25:49.221253] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:45.384 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-10-07 05:25:49.224221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 [2024-10-07 05:25:49.225639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-10-07 05:25:49.228226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:45.384 Test: test_nvme_ctrlr_init_delay ...[2024-10-07 05:25:49.231162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_alloc_io_qpair_rr_1 ...[2024-10-07 05:25:49.233002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 [2024-10-07 05:25:49.233305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:45.384 [2024-10-07 05:25:49.233696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:45.384 [2024-10-07 05:25:49.233970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:45.384 [2024-10-07 05:25:49.234227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:45.384 passed 00:06:45.384 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:45.384 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:45.384 Test: test_alloc_io_qpair_wrr_1 ...[2024-10-07 05:25:49.235324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 passed 00:06:45.384 Test: test_alloc_io_qpair_wrr_2 ...[2024-10-07 05:25:49.236059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.384 [2024-10-07 05:25:49.236398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:45.384 passed 00:06:45.384 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-10-07 05:25:49.237161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:45.384 [2024-10-07 05:25:49.237534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:45.384 [2024-10-07 05:25:49.237833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
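
The alloc_io_qpair cases above ("No free I/O queue IDs", "invalid queue priority for default round robin arbitration method") correspond to the public spdk_nvme_ctrlr_alloc_io_qpair() path. A minimal sketch of that call follows; it is illustrative only, not the unit-test code itself, and "ctrlr" is assumed to be an already-attached controller.

#include "spdk/nvme.h"

/* Illustrative sketch: allocate an I/O qpair with an explicit priority. */
static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	/* A non-default priority is only accepted when weighted round robin
	 * arbitration is in use; with the default round-robin method it is
	 * rejected, which is the "invalid queue priority" error exercised above. */
	opts.qprio = SPDK_NVME_QPRIO_MEDIUM;

	/* Returns NULL when, e.g., no free I/O queue IDs remain on the controller. */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
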
00:06:45.384 [2024-10-07 05:25:49.238081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_fail ...[2024-10-07 05:25:49.238625] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:45.384 passed 00:06:45.384 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:45.384 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:45.384 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:45.384 Test: test_nvme_ctrlr_test_active_ns ...[2024-10-07 05:25:49.242895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.644 passed 00:06:45.644 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:45.644 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:45.644 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:45.644 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-10-07 05:25:49.571369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.644 passed 00:06:45.644 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-10-07 05:25:49.578753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.644 passed 00:06:45.644 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-10-07 05:25:49.580063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.644 [2024-10-07 05:25:49.580166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:45.644 passed 00:06:45.644 Test: test_alloc_io_qpair_fail ...[2024-10-07 05:25:49.581412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.644 passed 00:06:45.644 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:45.644 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:45.644 Test: test_nvme_ctrlr_set_state ...[2024-10-07 05:25:49.581561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:45.644 passed 00:06:45.645 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-10-07 05:25:49.581719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
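
The "admin_queue_size 0 is less than minimum defined by NVMe spec" warning that recurs through this suite comes from controller construction clamping a too-small admin queue, and the keep-alive case drives the Keep Alive Timeout feature. In application code both are configured through spdk_nvme_ctrlr_opts before connecting; a hedged sketch (field names follow the public opts structure, but exact members can vary between SPDK releases):

#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_with_opts(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 10 * 1000;  /* 10 s keep-alive */
	/* Values below the spec minimum are raised automatically, which is the
	 * "use min value" warning repeated throughout the nvme_ctrlr suite. */
	opts.admin_queue_size = 32;

	return spdk_nvme_connect(trid, &opts, sizeof(opts));
}
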
00:06:45.645 [2024-10-07 05:25:49.581773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.645 passed 00:06:45.645 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-10-07 05:25:49.606685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_ns_mgmt ...[2024-10-07 05:25:49.653000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_reset ...[2024-10-07 05:25:49.654749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_aer_callback ...[2024-10-07 05:25:49.655404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-10-07 05:25:49.657076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:45.905 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:45.905 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-10-07 05:25:49.659045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:45.905 Test: test_nvme_ctrlr_ana_resize ...[2024-10-07 05:25:49.660634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:45.905 Test: test_nvme_transport_ctrlr_ready ...[2024-10-07 05:25:49.662356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:45.905 [2024-10-07 05:25:49.662613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:45.905 passed 00:06:45.905 Test: test_nvme_ctrlr_disable ...[2024-10-07 05:25:49.662850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:45.905 passed 00:06:45.905 00:06:45.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.905 suites 1 1 n/a 0 0 00:06:45.905 tests 43 43 43 0 0 00:06:45.905 asserts 10418 10418 10418 0 n/a 00:06:45.905 00:06:45.905 Elapsed time = 0.407 seconds 00:06:45.905 05:25:49 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:45.905 00:06:45.905 00:06:45.905 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:45.905 http://cunit.sourceforge.net/ 00:06:45.905 00:06:45.905 00:06:45.905 Suite: nvme_ctrlr_cmd 00:06:45.905 Test: test_get_log_pages ...passed 00:06:45.905 Test: test_set_feature_cmd ...passed 00:06:45.905 Test: test_set_feature_ns_cmd ...passed 00:06:45.905 Test: test_get_feature_cmd ...passed 00:06:45.905 Test: test_get_feature_ns_cmd ...passed 00:06:45.905 Test: test_abort_cmd ...passed 00:06:45.905 Test: test_set_host_id_cmds ...[2024-10-07 05:25:49.703632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:45.905 passed 00:06:45.905 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:45.905 Test: test_io_raw_cmd ...passed 00:06:45.905 Test: test_io_raw_cmd_with_md ...passed 00:06:45.905 Test: test_namespace_attach ...passed 00:06:45.905 Test: test_namespace_detach ...passed 00:06:45.905 Test: test_namespace_create ...passed 00:06:45.905 Test: test_namespace_delete ...passed 00:06:45.905 Test: test_doorbell_buffer_config ...passed 00:06:45.905 Test: test_format_nvme ...passed 00:06:45.905 Test: test_fw_commit ...passed 00:06:45.905 Test: test_fw_image_download ...passed 00:06:45.905 Test: test_sanitize ...passed 00:06:45.905 Test: test_directive ...passed 00:06:45.905 Test: test_nvme_request_add_abort ...passed 00:06:45.905 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:45.905 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:45.905 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:45.905 00:06:45.905 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.905 suites 1 1 n/a 0 0 00:06:45.905 tests 24 24 24 0 0 00:06:45.905 asserts 198 198 198 0 n/a 00:06:45.905 00:06:45.905 Elapsed time = 0.001 seconds 00:06:45.905 05:25:49 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:45.905 00:06:45.905 00:06:45.905 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.905 http://cunit.sourceforge.net/ 00:06:45.905 00:06:45.905 00:06:45.906 Suite: nvme_ctrlr_cmd 00:06:45.906 Test: test_geometry_cmd ...passed 00:06:45.906 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:45.906 00:06:45.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.906 suites 1 1 n/a 0 0 00:06:45.906 tests 2 2 2 0 0 00:06:45.906 asserts 7 7 7 0 n/a 00:06:45.906 00:06:45.906 Elapsed time = 0.000 seconds 00:06:45.906 05:25:49 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:45.906 00:06:45.906 00:06:45.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.906 http://cunit.sourceforge.net/ 00:06:45.906 00:06:45.906 00:06:45.906 Suite: nvme 00:06:45.906 Test: test_nvme_ns_construct ...passed 00:06:45.906 Test: test_nvme_ns_uuid ...passed 00:06:45.906 Test: test_nvme_ns_csi ...passed 00:06:45.906 Test: test_nvme_ns_data ...passed 00:06:45.906 Test: test_nvme_ns_set_identify_data ...passed 00:06:45.906 Test: test_spdk_nvme_ns_get_values ...passed 00:06:45.906 Test: test_spdk_nvme_ns_is_active ...passed 00:06:45.906 Test: spdk_nvme_ns_supports ...passed 00:06:45.906 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:45.906 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:45.906 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:45.906 Test: test_nvme_ns_find_id_desc ...passed 00:06:45.906 00:06:45.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.906 suites 1 1 n/a 0 0 00:06:45.906 tests 
12 12 12 0 0 00:06:45.906 asserts 83 83 83 0 n/a 00:06:45.906 00:06:45.906 Elapsed time = 0.000 seconds 00:06:45.906 05:25:49 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:45.906 00:06:45.906 00:06:45.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.906 http://cunit.sourceforge.net/ 00:06:45.906 00:06:45.906 00:06:45.906 Suite: nvme_ns_cmd 00:06:45.906 Test: split_test ...passed 00:06:45.906 Test: split_test2 ...passed 00:06:45.906 Test: split_test3 ...passed 00:06:45.906 Test: split_test4 ...passed 00:06:45.906 Test: test_nvme_ns_cmd_flush ...passed 00:06:45.906 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:45.906 Test: test_nvme_ns_cmd_copy ...passed 00:06:45.906 Test: test_io_flags ...[2024-10-07 05:25:49.780496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:45.906 passed 00:06:45.906 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:45.906 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:45.906 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:45.906 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:45.906 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:45.906 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:45.906 Test: test_cmd_child_request ...passed 00:06:45.906 Test: test_nvme_ns_cmd_readv ...passed 00:06:45.906 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_writev ...[2024-10-07 05:25:49.781925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:45.906 passed 00:06:45.906 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_comparev ...passed 00:06:45.906 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:45.906 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:45.906 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:45.906 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:45.906 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:06:45.906 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:06:45.906 Test: test_nvme_ns_cmd_verify ...[2024-10-07 05:25:49.784168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:45.906 [2024-10-07 05:25:49.784292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:45.906 passed 00:06:45.906 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:45.906 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:45.906 00:06:45.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.906 suites 1 1 n/a 0 0 00:06:45.906 tests 32 32 32 0 0 00:06:45.906 asserts 550 550 550 0 n/a 00:06:45.906 00:06:45.906 Elapsed time = 0.005 seconds 00:06:45.906 05:25:49 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:45.906 00:06:45.906 00:06:45.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.906 http://cunit.sourceforge.net/ 00:06:45.906 00:06:45.906 00:06:45.906 Suite: nvme_ns_cmd 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
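
The nvme_ns_cmd suite above validates the io_flags mask ("Invalid io_flags 0xfffc") and the request-splitting logic (child lengths must be whole multiples of the LBA size). On the public API side the flags are the last argument of the spdk_nvme_ns_cmd_read/write family; a small sketch with an assumed completion callback and buffer:

#include "spdk/nvme.h"

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* completion status is available in cpl->status */
}

static int
issue_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	/* Only defined SPDK_NVME_IO_FLAGS_* bits are accepted; stray bits such as
	 * the 0xfffc / 0xffff000f masks used by the tests cause the call to fail. */
	return spdk_nvme_ns_cmd_read(ns, qpair, buf,
				     0 /* starting LBA */, 8 /* number of LBAs */,
				     read_done, NULL,
				     SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
}
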
00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:45.906 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:45.906 00:06:45.906 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.906 suites 1 1 n/a 0 0 00:06:45.906 tests 12 12 12 0 0 00:06:45.906 asserts 123 123 123 0 n/a 00:06:45.906 00:06:45.906 Elapsed time = 0.001 seconds 00:06:45.906 05:25:49 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:45.906 00:06:45.906 00:06:45.906 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.906 http://cunit.sourceforge.net/ 00:06:45.906 00:06:45.906 00:06:45.906 Suite: nvme_qpair 00:06:45.906 Test: test3 ...passed 00:06:45.906 Test: test_ctrlr_failed ...passed 00:06:45.906 Test: struct_packing ...passed 00:06:45.906 Test: test_nvme_qpair_process_completions ...[2024-10-07 05:25:49.840677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:45.906 [2024-10-07 05:25:49.841015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:45.906 [2024-10-07 05:25:49.841093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:45.906 passed 00:06:45.906 Test: test_nvme_completion_is_retry ...passed 00:06:45.906 Test: test_get_status_string ...passed 00:06:45.907 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-10-07 05:25:49.841203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:45.907 passed 00:06:45.907 Test: test_nvme_qpair_submit_request ...passed 00:06:45.907 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:45.907 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:45.907 Test: test_nvme_qpair_init_deinit ...passed 00:06:45.907 Test: test_nvme_get_sgl_print_info ...[2024-10-07 05:25:49.841667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:45.907 passed 00:06:45.907 00:06:45.907 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.907 suites 1 1 n/a 0 0 00:06:45.907 tests 12 12 12 0 0 00:06:45.907 asserts 154 154 154 0 n/a 00:06:45.907 00:06:45.907 Elapsed time = 0.001 seconds 00:06:45.907 05:25:49 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:45.907 00:06:45.907 00:06:45.907 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.907 http://cunit.sourceforge.net/ 00:06:45.907 00:06:45.907 00:06:45.907 Suite: nvme_pcie 00:06:45.907 Test: test_prp_list_append 
...[2024-10-07 05:25:49.870977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:45.907 [2024-10-07 05:25:49.871256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:45.907 [2024-10-07 05:25:49.871299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:45.907 [2024-10-07 05:25:49.871552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:45.907 [2024-10-07 05:25:49.871674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:45.907 passed 00:06:45.907 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:45.907 Test: test_shadow_doorbell_update ...passed 00:06:45.907 Test: test_build_contig_hw_sgl_request ...passed 00:06:45.907 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:45.907 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:45.907 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:45.907 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-10-07 05:25:49.871877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:45.907 passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-10-07 05:25:49.871969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
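
The test_prp_list_append failures restate the NVMe PRP rules: PRP1 may start at any dword-aligned offset within a page, every later entry must be page aligned, and a transfer spanning more than two pages needs one PRP-list entry per remaining page. A hypothetical helper (ours, not SPDK code) that works out the list length from those rules:

#include <stdint.h>

/* Hypothetical helper: how many PRP-list entries a transfer needs.
 * Returns 0 when PRP1/PRP2 alone can describe the buffer. */
static uint32_t
prp_list_entries(uint64_t virt_addr, uint32_t len, uint32_t page_size)
{
	uint64_t first_page = virt_addr & ~((uint64_t)page_size - 1);
	uint64_t last_byte  = virt_addr + len - 1;
	uint64_t pages      = (last_byte - first_page) / page_size + 1;

	/* 1 page: PRP1 only.  2 pages: PRP1 + PRP2 as a direct pointer.
	 * >2 pages: PRP2 points to a list holding the remaining (pages - 1) entries. */
	return pages <= 2 ? 0 : (uint32_t)(pages - 1);
}
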
00:06:45.907 passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-10-07 05:25:49.872059] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:45.907 [2024-10-07 05:25:49.872119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:45.907 passed 00:06:45.907 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:45.907 00:06:45.907 [2024-10-07 05:25:49.872168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:45.907 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.907 suites 1 1 n/a 0 0 00:06:45.907 tests 14 14 14 0 0 00:06:45.907 asserts 235 235 235 0 n/a 00:06:45.907 00:06:45.907 Elapsed time = 0.001 seconds 00:06:46.167 05:25:49 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:46.167 00:06:46.167 00:06:46.167 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.167 http://cunit.sourceforge.net/ 00:06:46.167 00:06:46.167 00:06:46.167 Suite: nvme_ns_cmd 00:06:46.167 Test: nvme_poll_group_create_test ...passed 00:06:46.167 Test: nvme_poll_group_add_remove_test ...passed 00:06:46.167 Test: nvme_poll_group_process_completions ...passed 00:06:46.167 Test: nvme_poll_group_destroy_test ...passed 00:06:46.167 Test: nvme_poll_group_get_free_stats ...passed 00:06:46.167 00:06:46.167 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.167 suites 1 1 n/a 0 0 00:06:46.167 tests 5 5 5 0 0 00:06:46.167 asserts 75 75 75 0 n/a 00:06:46.167 00:06:46.167 Elapsed time = 0.000 seconds 00:06:46.167 05:25:49 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:46.167 00:06:46.167 00:06:46.167 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.167 http://cunit.sourceforge.net/ 00:06:46.167 00:06:46.167 00:06:46.167 Suite: nvme_quirks 00:06:46.167 Test: test_nvme_quirks_striping ...passed 00:06:46.167 00:06:46.167 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.167 suites 1 1 n/a 0 0 00:06:46.167 tests 1 1 1 0 0 00:06:46.167 asserts 5 5 5 0 n/a 00:06:46.167 00:06:46.167 Elapsed time = 0.000 seconds 00:06:46.167 05:25:49 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:46.167 00:06:46.167 00:06:46.167 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.167 http://cunit.sourceforge.net/ 00:06:46.167 00:06:46.167 00:06:46.167 Suite: nvme_tcp 00:06:46.167 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:46.167 Test: test_nvme_tcp_build_iovs ...passed 00:06:46.167 Test: test_nvme_tcp_build_sgl_request ...[2024-10-07 05:25:49.947479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff3afcf9a0, and the iovcnt=16, remaining_size=28672 00:06:46.167 passed 00:06:46.167 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:46.167 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:46.167 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:46.167 Test: test_nvme_tcp_req_get ...passed 00:06:46.167 Test: test_nvme_tcp_req_init ...passed 00:06:46.167 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:46.167 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:46.167 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-10-07 05:25:49.948243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd16c0 is same with the state(6) to be set 00:06:46.167 passed 00:06:46.167 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:46.167 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:46.167 Test: test_nvme_tcp_pdu_ch_handle ...[2024-10-07 05:25:49.948672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0850 is same with the state(5) to be set 00:06:46.167 [2024-10-07 05:25:49.948760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff3afd1380 00:06:46.167 [2024-10-07 05:25:49.948830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:46.167 [2024-10-07 05:25:49.948942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.167 [2024-10-07 05:25:49.949029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:46.167 [2024-10-07 05:25:49.949159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:46.168 [2024-10-07 05:25:49.949278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.949671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0d10 is same with the state(5) to be set 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_qpair_connect_sock ...[2024-10-07 05:25:49.949882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:46.168 [2024-10-07 05:25:49.949960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:46.168 [2024-10-07 05:25:49.950304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:46.168 Test: test_nvme_tcp_c2h_payload_handle ...[2024-10-07 05:25:49.950451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff3afd0ec0): PDU Sequence Error 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_icresp_handle ...[2024-10-07 05:25:49.950603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:46.168 [2024-10-07 05:25:49.950668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:46.168 [2024-10-07 05:25:49.950720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0860 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.950775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:46.168 [2024-10-07 05:25:49.950837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0860 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.950929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afd0860 is same with the state(0) to be set 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:46.168 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-10-07 05:25:49.951032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff3afd1380): PDU Sequence Error 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-10-07 05:25:49.951152] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff3afcfb40 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-10-07 05:25:49.951344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff3afcf1c0, errno=0, rc=0 00:06:46.168 [2024-10-07 05:25:49.951436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afcf1c0 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.951534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff3afcf1c0 is same with the state(5) to be set 00:06:46.168 [2024-10-07 05:25:49.951630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff3afcf1c0 (0): Success 00:06:46.168 [2024-10-07 05:25:49.951708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff3afcf1c0 (0): Success 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-10-07 05:25:50.069012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
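
On the host side these TCP paths are reached by connecting with a TCP transport ID: the address family, traddr and trsvcid come from the parsed ID, and a queue size below 2 is rejected, as the "Minimum queue size is 2" messages show. A hedged sketch follows; the address is the one used by the tests, the port and subsystem NQN are placeholders:

#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_tcp(void)
{
	struct spdk_nvme_transport_id trid = {};

	/* Keys in this string are limited to 31 characters; longer keys fail
	 * to parse, as the transport-ID tests earlier in this run show. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:192.168.1.78 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:subsystem1") != 0) {
		return NULL;
	}

	/* NULL opts with size 0 means "use the controller defaults". */
	return spdk_nvme_connect(&trid, NULL, 0);
}
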
00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:46.168 Test: test_nvme_tcp_poll_group_get_stats ...[2024-10-07 05:25:50.069113] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:46.168 [2024-10-07 05:25:50.069338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_ctrlr_construct ...[2024-10-07 05:25:50.069401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:46.168 [2024-10-07 05:25:50.069669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:46.168 [2024-10-07 05:25:50.069765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:46.168 [2024-10-07 05:25:50.069915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:46.168 [2024-10-07 05:25:50.070036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:46.168 [2024-10-07 05:25:50.070207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:46.168 [2024-10-07 05:25:50.070333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:46.168 passed 00:06:46.168 Test: test_nvme_tcp_qpair_submit_request ...[2024-10-07 05:25:50.070579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:46.168 [2024-10-07 05:25:50.070672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:46.168 passed 00:06:46.168 00:06:46.168 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.168 suites 1 1 n/a 0 0 00:06:46.168 tests 27 27 27 0 0 00:06:46.168 asserts 624 624 624 0 n/a 00:06:46.168 00:06:46.168 Elapsed time = 0.123 seconds 00:06:46.168 05:25:50 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:46.168 00:06:46.168 00:06:46.168 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.168 http://cunit.sourceforge.net/ 00:06:46.168 00:06:46.168 00:06:46.168 Suite: nvme_transport 00:06:46.168 Test: test_nvme_get_transport ...passed 00:06:46.168 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:46.168 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:46.168 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:46.168 Test: test_ctrlr_get_memory_domains ...passed 00:06:46.168 00:06:46.168 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.168 suites 1 1 n/a 0 0 00:06:46.168 tests 5 5 5 0 0 00:06:46.168 asserts 28 28 28 0 n/a 00:06:46.168 00:06:46.168 Elapsed time = 0.000 seconds 00:06:46.168 05:25:50 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:46.168 00:06:46.168 00:06:46.168 CUnit - A unit testing framework for 
C - Version 2.1-3 00:06:46.168 http://cunit.sourceforge.net/ 00:06:46.168 00:06:46.168 00:06:46.168 Suite: nvme_io_msg 00:06:46.168 Test: test_nvme_io_msg_send ...passed 00:06:46.168 Test: test_nvme_io_msg_process ...passed 00:06:46.168 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:46.168 00:06:46.168 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.168 suites 1 1 n/a 0 0 00:06:46.168 tests 3 3 3 0 0 00:06:46.169 asserts 56 56 56 0 n/a 00:06:46.169 00:06:46.169 Elapsed time = 0.000 seconds 00:06:46.428 05:25:50 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:46.428 00:06:46.428 00:06:46.428 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.428 http://cunit.sourceforge.net/ 00:06:46.428 00:06:46.428 00:06:46.428 Suite: nvme_pcie_common 00:06:46.428 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-10-07 05:25:50.167754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:46.428 passed 00:06:46.428 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:46.428 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:46.428 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-10-07 05:25:50.168624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:46.428 [2024-10-07 05:25:50.168757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:46.428 passed 00:06:46.428 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-10-07 05:25:50.168809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:46.428 passed 00:06:46.428 Test: test_nvme_pcie_poll_group_get_stats ...[2024-10-07 05:25:50.169243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:46.428 [2024-10-07 05:25:50.169304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:46.428 passed 00:06:46.428 00:06:46.428 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.428 suites 1 1 n/a 0 0 00:06:46.428 tests 6 6 6 0 0 00:06:46.428 asserts 148 148 148 0 n/a 00:06:46.428 00:06:46.428 Elapsed time = 0.002 seconds 00:06:46.428 05:25:50 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:46.428 00:06:46.428 00:06:46.428 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.428 http://cunit.sourceforge.net/ 00:06:46.428 00:06:46.428 00:06:46.428 Suite: nvme_fabric 00:06:46.428 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:46.428 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:46.428 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:46.428 Test: test_nvme_fabric_discover_probe ...passed 00:06:46.428 Test: test_nvme_fabric_qpair_connect ...passed 00:06:46.428 00:06:46.429 [2024-10-07 05:25:50.194780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:46.429 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.429 suites 1 
1 n/a 0 0 00:06:46.429 tests 5 5 5 0 0 00:06:46.429 asserts 60 60 60 0 n/a 00:06:46.429 00:06:46.429 Elapsed time = 0.001 seconds 00:06:46.429 05:25:50 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:46.429 00:06:46.429 00:06:46.429 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.429 http://cunit.sourceforge.net/ 00:06:46.429 00:06:46.429 00:06:46.429 Suite: nvme_opal 00:06:46.429 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:46.429 Test: test_opal_add_short_atom_header ...[2024-10-07 05:25:50.218043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:46.429 passed 00:06:46.429 00:06:46.429 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.429 suites 1 1 n/a 0 0 00:06:46.429 tests 2 2 2 0 0 00:06:46.429 asserts 22 22 22 0 n/a 00:06:46.429 00:06:46.429 Elapsed time = 0.000 seconds 00:06:46.429 ************************************ 00:06:46.429 END TEST unittest_nvme 00:06:46.429 ************************************ 00:06:46.429 00:06:46.429 real 0m1.188s 00:06:46.429 user 0m0.620s 00:06:46.429 sys 0m0.409s 00:06:46.429 05:25:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.429 05:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:46.429 05:25:50 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:46.429 05:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.429 05:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.429 05:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:46.429 ************************************ 00:06:46.429 START TEST unittest_log 00:06:46.429 ************************************ 00:06:46.429 05:25:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:46.429 00:06:46.429 00:06:46.429 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.429 http://cunit.sourceforge.net/ 00:06:46.429 00:06:46.429 00:06:46.429 Suite: log 00:06:46.429 Test: log_test ...[2024-10-07 05:25:50.290707] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:46.429 [2024-10-07 05:25:50.291116] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:46.429 log dump test: 00:06:46.429 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:46.429 passed 00:06:46.429 Test: deprecation ...spdk dump test: 00:06:46.429 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:46.429 spdk dump test: 00:06:46.429 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:46.429 00000010 65 20 63 68 61 72 73 e chars 00:06:47.369 passed 00:06:47.369 00:06:47.369 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.369 suites 1 1 n/a 0 0 00:06:47.369 tests 2 2 2 0 0 00:06:47.369 asserts 73 73 73 0 n/a 00:06:47.369 00:06:47.369 Elapsed time = 0.001 seconds 00:06:47.369 00:06:47.369 real 0m1.028s 00:06:47.369 user 0m0.008s 00:06:47.369 sys 0m0.020s 00:06:47.369 05:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.369 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.369 ************************************ 00:06:47.369 END TEST unittest_log 00:06:47.369 ************************************ 00:06:47.369 05:25:51 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:47.369 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:06:47.369 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.369 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.630 ************************************ 00:06:47.630 START TEST unittest_lvol 00:06:47.630 ************************************ 00:06:47.630 05:25:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:47.630 00:06:47.630 00:06:47.630 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.631 http://cunit.sourceforge.net/ 00:06:47.631 00:06:47.631 00:06:47.631 Suite: lvol 00:06:47.631 Test: lvs_init_unload_success ...[2024-10-07 05:25:51.373424] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:47.631 passed 00:06:47.631 Test: lvs_init_destroy_success ...[2024-10-07 05:25:51.374356] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:47.631 passed 00:06:47.631 Test: lvs_init_opts_success ...passed 00:06:47.631 Test: lvs_unload_lvs_is_null_fail ...[2024-10-07 05:25:51.374781] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:47.631 passed 00:06:47.631 Test: lvs_names ...[2024-10-07 05:25:51.374996] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:47.631 [2024-10-07 05:25:51.375183] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:47.631 [2024-10-07 05:25:51.375511] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:47.631 passed 00:06:47.631 Test: lvol_create_destroy_success ...passed 00:06:47.631 Test: lvol_create_fail ...[2024-10-07 05:25:51.376235] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:47.631 [2024-10-07 05:25:51.376474] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:47.631 passed 00:06:47.631 Test: lvol_destroy_fail ...[2024-10-07 05:25:51.376942] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:47.631 passed 00:06:47.631 Test: lvol_close ...passed 00:06:47.631 Test: lvol_resize ...[2024-10-07 05:25:51.377294] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:47.631 [2024-10-07 05:25:51.377472] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:47.631 passed 00:06:47.631 Test: lvol_set_read_only ...passed 00:06:47.631 Test: test_lvs_load ...[2024-10-07 05:25:51.378393] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:47.631 [2024-10-07 05:25:51.378582] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:47.631 passed 00:06:47.631 Test: lvols_load ...[2024-10-07 05:25:51.378980] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:47.631 [2024-10-07 05:25:51.379215] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:47.631 passed 00:06:47.631 Test: lvol_open ...passed 00:06:47.631 Test: lvol_snapshot ...passed 00:06:47.631 Test: lvol_snapshot_fail ...[2024-10-07 
05:25:51.380026] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:47.631 passed 00:06:47.631 Test: lvol_clone ...passed 00:06:47.631 Test: lvol_clone_fail ...[2024-10-07 05:25:51.380683] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:47.631 passed 00:06:47.631 Test: lvol_iter_clones ...passed 00:06:47.631 Test: lvol_refcnt ...[2024-10-07 05:25:51.381281] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol ab0faa9a-975d-4508-9adf-07f8f47c06d5 because it is still open 00:06:47.631 passed 00:06:47.631 Test: lvol_names ...[2024-10-07 05:25:51.381615] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:47.631 [2024-10-07 05:25:51.381827] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:47.631 [2024-10-07 05:25:51.382138] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:47.631 passed 00:06:47.631 Test: lvol_create_thin_provisioned ...passed 00:06:47.631 Test: lvol_rename ...[2024-10-07 05:25:51.382797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:47.631 [2024-10-07 05:25:51.383011] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:47.631 passed 00:06:47.631 Test: lvs_rename ...[2024-10-07 05:25:51.383422] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:47.631 passed 00:06:47.631 Test: lvol_inflate ...[2024-10-07 05:25:51.383790] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:47.631 passed 00:06:47.631 Test: lvol_decouple_parent ...[2024-10-07 05:25:51.384149] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:47.631 passed 00:06:47.631 Test: lvol_get_xattr ...passed 00:06:47.631 Test: lvol_esnap_reload ...passed 00:06:47.631 Test: lvol_esnap_create_bad_args ...[2024-10-07 05:25:51.384710] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:47.631 [2024-10-07 05:25:51.384871] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
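
These lvol cases all run through the asynchronous lvolstore API: a store is created on top of a blobstore device, names must be NUL-terminated and unique within the store, and results arrive via callbacks. A rough sketch of store creation under those assumptions; option and callback names follow the public lvol.h, but exact signatures differ between SPDK releases, so treat this as illustrative:

#include <stdio.h>
#include "spdk/lvol.h"

static void
lvs_ready(void *cb_arg, struct spdk_lvol_store *lvs, int lvserrno)
{
	if (lvserrno != 0) {
		/* e.g. duplicate name, bad options, or blobstore failure */
		return;
	}
	/* lvols can now be created on "lvs"; each name must be unique within
	 * the store, per the "already exists" errors above. */
}

static int
make_store(struct spdk_bs_dev *bs_dev)
{
	struct spdk_lvs_opts opts;

	spdk_lvs_opts_init(&opts);
	snprintf(opts.name, sizeof(opts.name), "lvs");

	return spdk_lvs_init(bs_dev, &opts, lvs_ready, NULL);
}
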
00:06:47.631 [2024-10-07 05:25:51.385045] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:47.631 [2024-10-07 05:25:51.385304] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:47.631 [2024-10-07 05:25:51.385611] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:47.631 passed 00:06:47.631 Test: lvol_esnap_create_delete ...passed 00:06:47.631 Test: lvol_esnap_load_esnaps ...[2024-10-07 05:25:51.386096] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:47.631 passed 00:06:47.631 Test: lvol_esnap_missing ...[2024-10-07 05:25:51.386381] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:47.631 [2024-10-07 05:25:51.386561] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:47.631 passed 00:06:47.631 Test: lvol_esnap_hotplug ... 00:06:47.631 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:47.631 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:47.631 [2024-10-07 05:25:51.387361] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d7932dca-5e67-4e9a-960f-9cf58741c35d: failed to create esnap bs_dev: error -12 00:06:47.631 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:47.631 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:47.631 [2024-10-07 05:25:51.387702] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d9bcf33f-9186-4abc-8a76-fb808a9ea149: failed to create esnap bs_dev: error -12 00:06:47.631 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:47.631 [2024-10-07 05:25:51.387980] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f180521d-6ece-417f-ab80-569020469658: failed to create esnap bs_dev: error -12 00:06:47.631 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:47.631 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:47.631 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:47.631 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:47.631 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:47.631 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:47.631 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:47.631 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:47.631 passed 00:06:47.631 Test: lvol_get_by ...passed 00:06:47.631 00:06:47.631 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.631 suites 1 1 n/a 0 0 00:06:47.631 tests 34 34 34 0 0 00:06:47.631 asserts 1439 1439 1439 0 n/a 00:06:47.631 00:06:47.631 Elapsed time = 0.012 seconds 00:06:47.631 00:06:47.631 real 0m0.047s 00:06:47.631 user 0m0.016s 00:06:47.631 sys 0m0.027s 00:06:47.631 05:25:51 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.631 ************************************ 00:06:47.631 END TEST unittest_lvol 00:06:47.631 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.631 ************************************ 00:06:47.631 05:25:51 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:47.631 05:25:51 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:47.631 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.631 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.631 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.631 ************************************ 00:06:47.631 START TEST unittest_nvme_rdma 00:06:47.631 ************************************ 00:06:47.631 05:25:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:47.631 00:06:47.631 00:06:47.631 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.631 http://cunit.sourceforge.net/ 00:06:47.631 00:06:47.631 00:06:47.631 Suite: nvme_rdma 00:06:47.631 Test: test_nvme_rdma_build_sgl_request ...[2024-10-07 05:25:51.475204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:47.631 [2024-10-07 05:25:51.475495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:47.631 passed 00:06:47.631 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:47.631 Test: test_nvme_rdma_build_contig_request ...[2024-10-07 05:25:51.475606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:47.631 [2024-10-07 05:25:51.475682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:47.631 passed 00:06:47.631 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:47.631 Test: test_nvme_rdma_create_reqs ...[2024-10-07 05:25:51.475796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:47.631 passed 00:06:47.631 Test: test_nvme_rdma_create_rsps ...[2024-10-07 05:25:51.476105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:47.631 passed 00:06:47.632 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-10-07 05:25:51.476283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:47.632 passed 00:06:47.632 Test: test_nvme_rdma_poller_create ...[2024-10-07 05:25:51.476350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
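
The 16777215-byte limit in the build_sgl_request failures above is simply the 24-bit length field of a keyed SGL data block descriptor: a single descriptor can map at most 2^24 - 1 bytes, so the 16 MiB (16777216-byte) payload in the test is one byte over. Expressed as a small illustration (the names here are ours, not SPDK's):

#include <stdint.h>

/* Keyed SGL Data Block descriptors carry a 24-bit length field, hence the
 * "exceeds max keyed SGL block size 16777215" error for a 16 MiB payload. */
#define MAX_KEYED_SGL_LENGTH ((1u << 24) - 1)   /* 16777215 bytes */

static uint32_t
keyed_sgl_segments(uint64_t payload_len)
{
	/* number of keyed SGL descriptors needed to cover payload_len bytes */
	return (uint32_t)((payload_len + MAX_KEYED_SGL_LENGTH - 1) / MAX_KEYED_SGL_LENGTH);
}
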
00:06:47.632 passed 00:06:47.632 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-10-07 05:25:51.476502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:47.632 passed 00:06:47.632 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:47.632 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:47.632 Test: test_nvme_rdma_req_init ...passed 00:06:47.632 Test: test_nvme_rdma_validate_cm_event ...[2024-10-07 05:25:51.476763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:47.632 [2024-10-07 05:25:51.476828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:47.632 passed 00:06:47.632 Test: test_nvme_rdma_qpair_init ...passed 00:06:47.632 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:47.632 Test: test_nvme_rdma_memory_domain ...[2024-10-07 05:25:51.476993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:47.632 passed 00:06:47.632 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:47.632 Test: test_rdma_get_memory_translation ...[2024-10-07 05:25:51.477093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:47.632 passed 00:06:47.632 Test: test_get_rdma_qpair_from_wc ...passed 00:06:47.632 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:47.632 Test: test_nvme_rdma_poll_group_get_stats ...[2024-10-07 05:25:51.477159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:47.632 passed 00:06:47.632 Test: test_nvme_rdma_qpair_set_poller ...[2024-10-07 05:25:51.477244] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:47.632 [2024-10-07 05:25:51.477290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:47.632 [2024-10-07 05:25:51.477398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:47.632 [2024-10-07 05:25:51.477449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:47.632 [2024-10-07 05:25:51.477493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffeac69fb40 on poll group 0x60b0000001a0 00:06:47.632 [2024-10-07 05:25:51.477555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:47.632 [2024-10-07 05:25:51.477601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:47.632 [2024-10-07 05:25:51.477641] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffeac69fb40 on poll group 0x60b0000001a0 00:06:47.632 passed 00:06:47.632 00:06:47.632 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.632 suites 1 1 n/a 0 0 00:06:47.632 tests 22 22 22 0 0 00:06:47.632 asserts 412 412 412 0 n/a 00:06:47.632 00:06:47.632 Elapsed time = 0.003 seconds 00:06:47.632 [2024-10-07 05:25:51.477714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:47.632 00:06:47.632 real 0m0.033s 00:06:47.632 user 0m0.021s 00:06:47.632 sys 0m0.011s 00:06:47.632 05:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.632 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 ************************************ 00:06:47.632 END TEST unittest_nvme_rdma 00:06:47.632 ************************************ 00:06:47.632 05:25:51 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:47.632 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.632 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.632 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 ************************************ 00:06:47.632 START TEST unittest_nvmf_transport 00:06:47.632 ************************************ 00:06:47.632 05:25:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:47.632 00:06:47.632 00:06:47.632 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.632 http://cunit.sourceforge.net/ 00:06:47.632 00:06:47.632 00:06:47.632 Suite: nvmf 00:06:47.632 Test: test_spdk_nvmf_transport_create ...[2024-10-07 05:25:51.569431] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:47.632 [2024-10-07 05:25:51.569921] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:47.632 [2024-10-07 05:25:51.570139] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:47.632 [2024-10-07 05:25:51.570429] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:47.632 passed 00:06:47.632 Test: test_nvmf_transport_poll_group_create ...passed 00:06:47.632 Test: test_spdk_nvmf_transport_opts_init ...[2024-10-07 05:25:51.571440] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:06:47.632 [2024-10-07 05:25:51.571735] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:47.632 [2024-10-07 05:25:51.571922] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:47.632 passed 00:06:47.632 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:47.632 00:06:47.632 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.632 suites 1 1 n/a 0 0 00:06:47.632 tests 4 4 4 0 0 00:06:47.632 asserts 49 49 49 0 n/a 00:06:47.632 00:06:47.632 Elapsed time = 0.002 seconds 00:06:47.632 00:06:47.632 real 0m0.043s 00:06:47.632 user 0m0.037s 00:06:47.632 sys 0m0.004s 00:06:47.632 05:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.632 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.632 ************************************ 00:06:47.632 END TEST unittest_nvmf_transport 00:06:47.632 ************************************ 00:06:47.892 05:25:51 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:47.892 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.892 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.892 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.892 ************************************ 00:06:47.892 START TEST unittest_rdma 00:06:47.892 ************************************ 00:06:47.892 05:25:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:47.892 00:06:47.892 00:06:47.892 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.892 http://cunit.sourceforge.net/ 00:06:47.892 00:06:47.892 00:06:47.892 Suite: rdma_common 00:06:47.892 Test: test_spdk_rdma_pd ...[2024-10-07 05:25:51.658196] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:47.892 [2024-10-07 05:25:51.658602] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:47.892 passed 00:06:47.892 00:06:47.892 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.892 suites 1 1 n/a 0 0 00:06:47.892 tests 1 1 1 0 0 00:06:47.892 asserts 31 31 31 0 n/a 00:06:47.892 00:06:47.892 Elapsed time = 0.001 seconds 00:06:47.892 00:06:47.892 real 0m0.032s 00:06:47.892 user 0m0.010s 00:06:47.892 sys 0m0.022s 00:06:47.892 05:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.892 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.892 ************************************ 00:06:47.892 END TEST unittest_rdma 00:06:47.892 ************************************ 00:06:47.892 05:25:51 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:47.893 05:25:51 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:47.893 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.893 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.893 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.893 ************************************ 00:06:47.893 START TEST unittest_nvme_cuse 00:06:47.893 ************************************ 00:06:47.893 05:25:51 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:47.893 00:06:47.893 00:06:47.893 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.893 http://cunit.sourceforge.net/ 00:06:47.893 00:06:47.893 00:06:47.893 Suite: nvme_cuse 00:06:47.893 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:47.893 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:47.893 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:47.893 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:47.893 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:47.893 Test: test_cuse_nvme_submit_io ...[2024-10-07 05:25:51.750211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:47.893 passed 00:06:47.893 Test: test_cuse_nvme_reset ...[2024-10-07 05:25:51.750952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:47.893 passed 00:06:47.893 Test: test_nvme_cuse_stop ...passed 00:06:47.893 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:47.893 00:06:47.893 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.893 suites 1 1 n/a 0 0 00:06:47.893 tests 9 9 9 0 0 00:06:47.893 asserts 121 121 121 0 n/a 00:06:47.893 00:06:47.893 Elapsed time = 0.002 seconds 00:06:47.893 00:06:47.893 real 0m0.030s 00:06:47.893 user 0m0.015s 00:06:47.893 sys 0m0.012s 00:06:47.893 05:25:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.893 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.893 ************************************ 00:06:47.893 END TEST unittest_nvme_cuse 00:06:47.893 ************************************ 00:06:47.893 05:25:51 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:47.893 05:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.893 05:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.893 05:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.893 ************************************ 00:06:47.893 START TEST unittest_nvmf 00:06:47.893 ************************************ 00:06:47.893 05:25:51 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:06:47.893 05:25:51 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:47.893 00:06:47.893 00:06:47.893 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.893 http://cunit.sourceforge.net/ 00:06:47.893 00:06:47.893 00:06:47.893 Suite: nvmf 00:06:47.893 Test: test_get_log_page ...[2024-10-07 05:25:51.848681] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:47.893 passed 00:06:47.893 Test: test_process_fabrics_cmd ...passed 00:06:47.893 Test: test_connect ...[2024-10-07 05:25:51.850263] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:47.893 [2024-10-07 05:25:51.850525] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:47.893 [2024-10-07 05:25:51.850624] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:47.893 [2024-10-07 05:25:51.850704] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:06:47.893 [2024-10-07 05:25:51.850889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:47.893 [2024-10-07 05:25:51.850959] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:47.893 [2024-10-07 05:25:51.851155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:47.893 [2024-10-07 05:25:51.851249] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:47.893 [2024-10-07 05:25:51.851417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:47.893 [2024-10-07 05:25:51.851624] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:47.893 [2024-10-07 05:25:51.852113] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:47.893 [2024-10-07 05:25:51.852299] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:47.893 [2024-10-07 05:25:51.852488] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:47.893 [2024-10-07 05:25:51.852660] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:47.893 [2024-10-07 05:25:51.852843] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:47.893 [2024-10-07 05:25:51.853111] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:47.893 passed 00:06:47.893 Test: test_get_ns_id_desc_list ...passed 00:06:47.893 Test: test_identify_ns ...[2024-10-07 05:25:51.853553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:47.893 [2024-10-07 05:25:51.854023] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:47.893 [2024-10-07 05:25:51.854333] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:47.893 passed 00:06:47.893 Test: test_identify_ns_iocs_specific ...[2024-10-07 05:25:51.854652] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:47.893 [2024-10-07 05:25:51.855196] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:47.893 passed 00:06:47.893 Test: test_reservation_write_exclusive ...passed 00:06:47.893 Test: test_reservation_exclusive_access ...passed 00:06:47.893 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:47.893 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:47.893 Test: test_reservation_notification_log_page ...passed 00:06:47.893 Test: test_get_dif_ctx ...passed 00:06:47.893 Test: test_set_get_features ...[2024-10-07 05:25:51.856197] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:47.893 [2024-10-07 05:25:51.856315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:47.893 [2024-10-07 05:25:51.856410] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:47.893 [2024-10-07 05:25:51.856517] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:47.893 passed 00:06:47.893 Test: test_identify_ctrlr ...passed 00:06:47.893 Test: test_identify_ctrlr_iocs_specific ...passed 00:06:47.893 Test: test_custom_admin_cmd ...passed 00:06:47.893 Test: test_fused_compare_and_write ...[2024-10-07 05:25:51.857325] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:47.893 [2024-10-07 05:25:51.857441] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:47.893 [2024-10-07 05:25:51.857528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:47.893 passed 00:06:47.893 Test: test_multi_async_event_reqs ...passed 00:06:47.893 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:47.893 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:47.893 Test: test_multi_async_events ...passed 00:06:47.893 Test: test_rae ...passed 00:06:47.893 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:47.893 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:47.893 Test: test_spdk_nvmf_request_zcopy_start ...[2024-10-07 05:25:51.858401] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:47.893 passed 00:06:47.893 Test: test_zcopy_read ...passed 00:06:47.893 Test: test_zcopy_write ...passed 00:06:47.893 Test: test_nvmf_property_set ...passed 00:06:47.893 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-10-07 05:25:51.858880] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:47.893 [2024-10-07 05:25:51.859034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:47.893 passed 00:06:47.893 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-10-07 05:25:51.859125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:47.893 [2024-10-07 05:25:51.859209] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:47.893 [2024-10-07 05:25:51.859269] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:47.893 passed 00:06:47.893 00:06:47.893 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.893 suites 1 1 n/a 0 0 00:06:47.893 tests 30 30 30 0 0 00:06:47.893 asserts 885 885 885 0 n/a 00:06:47.893 00:06:47.893 Elapsed time = 0.011 seconds 00:06:48.154 05:25:51 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:48.154 00:06:48.154 00:06:48.154 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.154 http://cunit.sourceforge.net/ 00:06:48.154 00:06:48.154 00:06:48.154 Suite: nvmf 00:06:48.154 Test: test_get_rw_params ...passed 00:06:48.154 Test: test_lba_in_range ...passed 00:06:48.154 Test: test_get_dif_ctx ...passed 00:06:48.154 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:48.154 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-10-07 05:25:51.887292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:48.155 [2024-10-07 05:25:51.887604] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-10-07 05:25:51.887705] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:48.155 [2024-10-07 05:25:51.887773] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:48.155 [2024-10-07 05:25:51.887860] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-10-07 05:25:51.888027] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:48.155 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-10-07 05:25:51.888081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:48.155 [2024-10-07 05:25:51.888152] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:48.155 [2024-10-07 05:25:51.888192] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:48.155 passed 00:06:48.155 00:06:48.155 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.155 suites 1 1 n/a 0 0 00:06:48.155 tests 9 9 9 0 0 00:06:48.155 asserts 157 157 157 0 n/a 00:06:48.155 00:06:48.155 Elapsed time = 0.001 seconds 00:06:48.155 05:25:51 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:48.155 00:06:48.155 00:06:48.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.155 http://cunit.sourceforge.net/ 00:06:48.155 00:06:48.155 00:06:48.155 Suite: nvmf 00:06:48.155 Test: test_discovery_log ...passed 00:06:48.155 Test: test_discovery_log_with_filters ...passed 00:06:48.155 00:06:48.155 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.155 suites 1 1 n/a 0 0 00:06:48.155 tests 2 2 2 0 0 00:06:48.155 asserts 238 238 238 0 n/a 00:06:48.155 00:06:48.155 Elapsed time = 0.003 seconds 00:06:48.155 05:25:51 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:48.155 00:06:48.155 00:06:48.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.155 http://cunit.sourceforge.net/ 00:06:48.155 00:06:48.155 00:06:48.155 Suite: nvmf 
00:06:48.155 Test: nvmf_test_create_subsystem ...[2024-10-07 05:25:51.948702] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:48.155 [2024-10-07 05:25:51.949045] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:48.155 [2024-10-07 05:25:51.949155] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:48.155 [2024-10-07 05:25:51.949203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:48.155 [2024-10-07 05:25:51.949241] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:48.155 [2024-10-07 05:25:51.949289] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:48.155 [2024-10-07 05:25:51.949421] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:48.155 [2024-10-07 05:25:51.949612] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:48.155 [2024-10-07 05:25:51.949745] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:48.155 [2024-10-07 05:25:51.949795] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:48.155 passed 00:06:48.155 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-10-07 05:25:51.949832] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:48.155 [2024-10-07 05:25:51.950028] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:48.155 [2024-10-07 05:25:51.950150] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:48.155 passed 00:06:48.155 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:48.155 Test: test_reservation_register ...[2024-10-07 05:25:51.950415] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 [2024-10-07 05:25:51.950630] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:48.155 passed 00:06:48.155 Test: test_reservation_register_with_ptpl ...passed 00:06:48.155 Test: test_reservation_acquire_preempt_1 ...[2024-10-07 05:25:51.951731] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:48.155 Test: test_reservation_release ...[2024-10-07 05:25:51.953463] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_unregister_notification ...[2024-10-07 05:25:51.953730] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_release_notification ...[2024-10-07 05:25:51.954034] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_release_notification_write_exclusive ...[2024-10-07 05:25:51.954233] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_clear_notification ...[2024-10-07 05:25:51.954468] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_reservation_preempt_notification ...[2024-10-07 05:25:51.954717] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:48.155 passed 00:06:48.155 Test: test_spdk_nvmf_ns_event ...passed 00:06:48.155 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:48.155 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:48.155 Test: test_spdk_nvmf_subsystem_add_host ...[2024-10-07 05:25:51.955415] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:48.155 [2024-10-07 05:25:51.955551] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_ns_reservation_report ...passed 00:06:48.155 Test: test_nvmf_nqn_is_valid ...[2024-10-07 05:25:51.955723] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:48.155 [2024-10-07 05:25:51.955800] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:48.155 [2024-10-07 05:25:51.955850] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:32b813ea-1ce6-4897-a955-869f45c6c5e": uuid is not the correct length 00:06:48.155 [2024-10-07 05:25:51.955926] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_ns_reservation_restore ...passed 00:06:48.155 Test: test_nvmf_subsystem_state_change ...[2024-10-07 05:25:51.956025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_reservation_custom_ops ...passed 00:06:48.155 00:06:48.155 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.155 suites 1 1 n/a 0 0 00:06:48.155 tests 22 22 22 0 0 00:06:48.155 asserts 407 407 407 0 n/a 00:06:48.155 00:06:48.155 Elapsed time = 0.008 seconds 00:06:48.155 05:25:51 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:48.155 00:06:48.155 00:06:48.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.155 http://cunit.sourceforge.net/ 00:06:48.155 00:06:48.155 00:06:48.155 Suite: nvmf 00:06:48.155 Test: test_nvmf_tcp_create ...[2024-10-07 05:25:52.012339] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_tcp_destroy ...passed 00:06:48.155 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:48.155 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:48.155 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:48.155 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:48.155 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:48.155 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-10-07 05:25:52.121992] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.155 [2024-10-07 05:25:52.122089] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.155 [2024-10-07 05:25:52.122224] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.155 passed 00:06:48.155 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-10-07 05:25:52.122287] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.122326] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 passed 00:06:48.156 Test: test_nvmf_tcp_icreq_handle ...[2024-10-07 05:25:52.122418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:48.156 [2024-10-07 05:25:52.122543] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.122617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.122656] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:48.156 [2024-10-07 05:25:52.122716] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.122773] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.122815] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 passed 00:06:48.156 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:48.156 Test: test_nvmf_tcp_invalid_sgl ...[2024-10-07 05:25:52.122865] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.122922] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.123022] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:48.156 [2024-10-07 05:25:52.123082] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123127] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bd890 is same with the state(5) to be set 00:06:48.156 passed 00:06:48.156 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-10-07 05:25:52.123174] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe291be5f0 00:06:48.156 [2024-10-07 05:25:52.123271] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123330] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.123379] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe291bdd50 00:06:48.156 [2024-10-07 05:25:52.123434] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123490] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.123540] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:48.156 [2024-10-07 05:25:52.123615] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.123751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:48.156 [2024-10-07 05:25:52.123793] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123836] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.123901] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.123955] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.124045] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.124095] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.124176] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.124220] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.124260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.124306] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.124390] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.124436] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 [2024-10-07 05:25:52.124493] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:48.156 [2024-10-07 05:25:52.124553] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe291bdd50 is same with the state(5) to be set 00:06:48.156 passed 00:06:48.416 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:06:48.416 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-10-07 05:25:52.149696] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:48.416 passed 00:06:48.416 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-10-07 05:25:52.149810] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:48.416 [2024-10-07 05:25:52.150240] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:48.416 [2024-10-07 05:25:52.150308] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:48.416 passed 00:06:48.416 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed[2024-10-07 05:25:52.150591] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:48.416 [2024-10-07 05:25:52.150663] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:06:48.416 00:06:48.416 00:06:48.416 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.416 suites 1 1 n/a 0 0 00:06:48.416 tests 17 17 17 0 0 00:06:48.416 asserts 222 222 222 0 n/a 00:06:48.416 00:06:48.416 Elapsed time = 0.161 seconds 00:06:48.416 05:25:52 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:48.416 00:06:48.416 00:06:48.416 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.417 http://cunit.sourceforge.net/ 00:06:48.417 00:06:48.417 00:06:48.417 Suite: nvmf 00:06:48.417 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:48.417 00:06:48.417 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.417 suites 1 1 n/a 0 0 00:06:48.417 tests 1 1 1 0 0 00:06:48.417 asserts 17 17 17 0 n/a 00:06:48.417 00:06:48.417 Elapsed time = 0.023 seconds 00:06:48.417 00:06:48.417 real 0m0.481s 00:06:48.417 user 0m0.249s 00:06:48.417 sys 0m0.234s 00:06:48.417 ************************************ 00:06:48.417 END TEST unittest_nvmf 00:06:48.417 ************************************ 00:06:48.417 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.417 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.417 05:25:52 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:48.417 05:25:52 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:48.417 05:25:52 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:48.417 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.417 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.417 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.417 
************************************ 00:06:48.417 START TEST unittest_nvmf_rdma 00:06:48.417 ************************************ 00:06:48.417 05:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:48.677 00:06:48.677 00:06:48.677 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.677 http://cunit.sourceforge.net/ 00:06:48.677 00:06:48.677 00:06:48.677 Suite: nvmf 00:06:48.677 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-10-07 05:25:52.404502] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:48.677 [2024-10-07 05:25:52.405268] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:48.677 [2024-10-07 05:25:52.405492] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:48.677 passed 00:06:48.677 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:48.677 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:48.677 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:48.677 Test: test_nvmf_rdma_opts_init ...passed 00:06:48.677 Test: test_nvmf_rdma_request_free_data ...passed 00:06:48.677 Test: test_nvmf_rdma_update_ibv_state ...[2024-10-07 05:25:52.407097] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:06:48.677 [2024-10-07 05:25:52.407283] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:48.677 passed 00:06:48.677 Test: test_nvmf_rdma_resources_create ...passed 00:06:48.677 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:48.677 Test: test_nvmf_rdma_resize_cq ...[2024-10-07 05:25:52.408764] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:06:48.677 Using CQ of insufficient size may lead to CQ overrun 00:06:48.677 [2024-10-07 05:25:52.409021] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:48.677 [2024-10-07 05:25:52.409210] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:48.677 passed 00:06:48.677 00:06:48.677 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.677 suites 1 1 n/a 0 0 00:06:48.677 tests 10 10 10 0 0 00:06:48.677 asserts 584 584 584 0 n/a 00:06:48.677 00:06:48.677 Elapsed time = 0.004 seconds 00:06:48.677 00:06:48.677 real 0m0.045s 00:06:48.677 user 0m0.029s 00:06:48.677 sys 0m0.015s 00:06:48.677 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.677 ************************************ 00:06:48.677 END TEST unittest_nvmf_rdma 00:06:48.677 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.677 ************************************ 00:06:48.677 05:25:52 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:48.677 05:25:52 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:48.677 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.677 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.677 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.677 ************************************ 00:06:48.677 START TEST unittest_scsi 00:06:48.677 ************************************ 00:06:48.677 05:25:52 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:06:48.677 05:25:52 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:48.677 00:06:48.677 00:06:48.677 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.677 http://cunit.sourceforge.net/ 00:06:48.677 00:06:48.677 00:06:48.677 Suite: dev_suite 00:06:48.677 Test: dev_destruct_null_dev ...passed 00:06:48.677 Test: dev_destruct_zero_luns ...passed 00:06:48.677 Test: dev_destruct_null_lun ...passed 00:06:48.677 Test: dev_destruct_success ...passed 00:06:48.677 Test: dev_construct_num_luns_zero ...[2024-10-07 05:25:52.508451] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:48.677 passed 00:06:48.677 Test: dev_construct_no_lun_zero ...[2024-10-07 05:25:52.508743] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:48.677 passed 00:06:48.677 Test: dev_construct_null_lun ...passed 00:06:48.677 Test: dev_construct_name_too_long ...passed 00:06:48.677 Test: dev_construct_success ...passed 00:06:48.677 Test: dev_construct_success_lun_zero_not_first ...passed[2024-10-07 05:25:52.508799] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:48.677 [2024-10-07 05:25:52.508850] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:48.677 00:06:48.677 Test: 
dev_queue_mgmt_task_success ...passed 00:06:48.677 Test: dev_queue_task_success ...passed 00:06:48.677 Test: dev_stop_success ...passed 00:06:48.677 Test: dev_add_port_max_ports ...[2024-10-07 05:25:52.509122] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:48.677 passed 00:06:48.677 Test: dev_add_port_construct_failure1 ...passed 00:06:48.677 Test: dev_add_port_construct_failure2 ...[2024-10-07 05:25:52.509222] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:48.677 [2024-10-07 05:25:52.509316] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:48.677 passed 00:06:48.677 Test: dev_add_port_success1 ...passed 00:06:48.677 Test: dev_add_port_success2 ...passed 00:06:48.677 Test: dev_add_port_success3 ...passed 00:06:48.677 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:48.677 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:48.677 Test: dev_find_port_by_id_success ...passed 00:06:48.677 Test: dev_add_lun_bdev_not_found ...passed 00:06:48.677 Test: dev_add_lun_no_free_lun_id ...[2024-10-07 05:25:52.509700] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:48.677 passed 00:06:48.677 Test: dev_add_lun_success1 ...passed 00:06:48.677 Test: dev_add_lun_success2 ...passed 00:06:48.677 Test: dev_check_pending_tasks ...passed 00:06:48.677 Test: dev_iterate_luns ...passed 00:06:48.677 Test: dev_find_free_lun ...passed 00:06:48.677 00:06:48.677 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.677 suites 1 1 n/a 0 0 00:06:48.677 tests 29 29 29 0 0 00:06:48.677 asserts 97 97 97 0 n/a 00:06:48.677 00:06:48.677 Elapsed time = 0.002 seconds 00:06:48.677 05:25:52 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:48.677 00:06:48.677 00:06:48.677 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.677 http://cunit.sourceforge.net/ 00:06:48.677 00:06:48.677 00:06:48.677 Suite: lun_suite 00:06:48.677 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-10-07 05:25:52.539487] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:48.677 passed 00:06:48.677 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-10-07 05:25:52.539848] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:48.677 passed 00:06:48.677 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:48.677 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:48.677 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:48.677 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-10-07 05:25:52.540015] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:48.678 passed 00:06:48.678 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:48.678 Test: lun_append_task_null_lun_not_supported ...passed 00:06:48.678 Test: lun_execute_scsi_task_pending ...passed 00:06:48.678 Test: lun_execute_scsi_task_complete ...passed 00:06:48.678 Test: lun_execute_scsi_task_resize ...passed 00:06:48.678 Test: lun_destruct_success ...passed 00:06:48.678 Test: lun_construct_null_ctx ...[2024-10-07 05:25:52.540196] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: 
bdev_name must be non-NULL 00:06:48.678 passed 00:06:48.678 Test: lun_construct_success ...passed 00:06:48.678 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:48.678 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:48.678 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:48.678 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:48.678 00:06:48.678 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.678 suites 1 1 n/a 0 0 00:06:48.678 tests 18 18 18 0 0 00:06:48.678 asserts 153 153 153 0 n/a 00:06:48.678 00:06:48.678 Elapsed time = 0.001 seconds 00:06:48.678 05:25:52 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:48.678 00:06:48.678 00:06:48.678 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.678 http://cunit.sourceforge.net/ 00:06:48.678 00:06:48.678 00:06:48.678 Suite: scsi_suite 00:06:48.678 Test: scsi_init ...passed 00:06:48.678 00:06:48.678 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.678 suites 1 1 n/a 0 0 00:06:48.678 tests 1 1 1 0 0 00:06:48.678 asserts 1 1 1 0 n/a 00:06:48.678 00:06:48.678 Elapsed time = 0.000 seconds 00:06:48.678 05:25:52 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:48.678 00:06:48.678 00:06:48.678 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.678 http://cunit.sourceforge.net/ 00:06:48.678 00:06:48.678 00:06:48.678 Suite: translation_suite 00:06:48.678 Test: mode_select_6_test ...passed 00:06:48.678 Test: mode_select_6_test2 ...passed 00:06:48.678 Test: mode_sense_6_test ...passed 00:06:48.678 Test: mode_sense_10_test ...passed 00:06:48.678 Test: inquiry_evpd_test ...passed 00:06:48.678 Test: inquiry_standard_test ...passed 00:06:48.678 Test: inquiry_overflow_test ...passed 00:06:48.678 Test: task_complete_test ...passed 00:06:48.678 Test: lba_range_test ...passed 00:06:48.678 Test: xfer_len_test ...[2024-10-07 05:25:52.596975] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:48.678 passed 00:06:48.678 Test: xfer_test ...passed 00:06:48.678 Test: scsi_name_padding_test ...passed 00:06:48.678 Test: get_dif_ctx_test ...passed 00:06:48.678 Test: unmap_split_test ...passed 00:06:48.678 00:06:48.678 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.678 suites 1 1 n/a 0 0 00:06:48.678 tests 14 14 14 0 0 00:06:48.678 asserts 1200 1200 1200 0 n/a 00:06:48.678 00:06:48.678 Elapsed time = 0.004 seconds 00:06:48.678 05:25:52 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:48.678 00:06:48.678 00:06:48.678 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.678 http://cunit.sourceforge.net/ 00:06:48.678 00:06:48.678 00:06:48.678 Suite: reservation_suite 00:06:48.678 Test: test_reservation_register ...[2024-10-07 05:25:52.626700] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 passed 00:06:48.678 Test: test_reservation_reserve ...[2024-10-07 05:25:52.627021] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 [2024-10-07 05:25:52.627102] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:48.678 passed 00:06:48.678 Test: 
test_reservation_preempt_non_all_regs ...[2024-10-07 05:25:52.627202] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:48.678 [2024-10-07 05:25:52.627276] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 passed 00:06:48.678 Test: test_reservation_preempt_all_regs ...[2024-10-07 05:25:52.627353] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:48.678 [2024-10-07 05:25:52.627476] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 passed 00:06:48.678 Test: test_reservation_cmds_conflict ...[2024-10-07 05:25:52.627633] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 [2024-10-07 05:25:52.627706] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:48.678 [2024-10-07 05:25:52.627751] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:48.678 [2024-10-07 05:25:52.627784] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:48.678 passed 00:06:48.678 Test: test_scsi2_reserve_release ...passed 00:06:48.678 Test: test_pr_with_scsi2_reserve_release ...[2024-10-07 05:25:52.627836] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:48.678 [2024-10-07 05:25:52.627870] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:48.678 [2024-10-07 05:25:52.627964] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:48.678 passed 00:06:48.678 00:06:48.678 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.678 suites 1 1 n/a 0 0 00:06:48.678 tests 7 7 7 0 0 00:06:48.678 asserts 257 257 257 0 n/a 00:06:48.678 00:06:48.678 Elapsed time = 0.001 seconds 00:06:48.678 00:06:48.678 real 0m0.145s 00:06:48.678 user 0m0.091s 00:06:48.678 sys 0m0.055s 00:06:48.678 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.678 ************************************ 00:06:48.678 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.678 END TEST unittest_scsi 00:06:48.678 ************************************ 00:06:48.937 05:25:52 -- unit/unittest.sh@276 -- # uname -s 00:06:48.937 05:25:52 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:48.937 05:25:52 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:06:48.937 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.937 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.937 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.937 ************************************ 00:06:48.937 START TEST unittest_sock 00:06:48.937 ************************************ 00:06:48.937 05:25:52 -- common/autotest_common.sh@1104 -- # unittest_sock 00:06:48.937 05:25:52 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:48.937 00:06:48.937 00:06:48.937 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.937 http://cunit.sourceforge.net/ 00:06:48.937 00:06:48.937 00:06:48.937 Suite: sock 00:06:48.937 Test: posix_sock ...passed 00:06:48.937 Test: ut_sock ...passed 00:06:48.937 Test: posix_sock_group ...passed 00:06:48.937 Test: ut_sock_group ...passed 00:06:48.937 Test: posix_sock_group_fairness ...passed 00:06:48.937 Test: _posix_sock_close ...passed 00:06:48.937 Test: sock_get_default_opts ...passed 00:06:48.937 Test: ut_sock_impl_get_set_opts ...passed 00:06:48.937 Test: posix_sock_impl_get_set_opts ...passed 00:06:48.937 Test: ut_sock_map ...passed 00:06:48.937 Test: override_impl_opts ...passed 00:06:48.937 Test: ut_sock_group_get_ctx ...passed 00:06:48.937 00:06:48.937 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.937 suites 1 1 n/a 0 0 00:06:48.937 tests 12 12 12 0 0 00:06:48.937 asserts 349 349 349 0 n/a 00:06:48.937 00:06:48.937 Elapsed time = 0.008 seconds 00:06:48.937 05:25:52 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:48.937 00:06:48.937 00:06:48.937 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.937 http://cunit.sourceforge.net/ 00:06:48.937 00:06:48.937 00:06:48.937 Suite: posix 00:06:48.937 Test: flush ...passed 00:06:48.937 00:06:48.937 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.937 suites 1 1 n/a 0 0 00:06:48.937 tests 1 1 1 0 0 00:06:48.937 asserts 28 28 28 0 n/a 00:06:48.937 00:06:48.937 Elapsed time = 0.000 seconds 00:06:48.937 05:25:52 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:48.937 00:06:48.937 real 0m0.093s 00:06:48.937 user 0m0.038s 00:06:48.937 sys 0m0.032s 00:06:48.937 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.938 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.938 ************************************ 00:06:48.938 END TEST unittest_sock 00:06:48.938 ************************************ 00:06:48.938 05:25:52 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:48.938 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.938 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.938 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:48.938 ************************************ 00:06:48.938 START TEST unittest_thread 00:06:48.938 ************************************ 00:06:48.938 05:25:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:48.938 00:06:48.938 00:06:48.938 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.938 http://cunit.sourceforge.net/ 00:06:48.938 00:06:48.938 00:06:48.938 Suite: io_channel 00:06:48.938 Test: thread_alloc ...passed 00:06:48.938 Test: thread_send_msg ...passed 00:06:48.938 Test: thread_poller ...passed 00:06:48.938 Test: poller_pause ...passed 00:06:48.938 Test: thread_for_each ...passed 00:06:48.938 Test: for_each_channel_remove ...passed 00:06:48.938 Test: for_each_channel_unreg ...[2024-10-07 05:25:52.896521] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffd83437510 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:48.938 passed 00:06:48.938 Test: thread_name ...passed 
00:06:48.938 Test: channel ...[2024-10-07 05:25:52.900925] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55a7b0d670e0 00:06:48.938 passed 00:06:48.938 Test: channel_destroy_races ...passed 00:06:48.938 Test: thread_exit_test ...[2024-10-07 05:25:52.906078] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:48.938 passed 00:06:48.938 Test: thread_update_stats_test ...passed 00:06:48.938 Test: nested_channel ...passed 00:06:49.197 Test: device_unregister_and_thread_exit_race ...passed 00:06:49.197 Test: cache_closest_timed_poller ...passed 00:06:49.197 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:49.197 Test: io_device_lookup ...passed 00:06:49.197 Test: spdk_spin ...[2024-10-07 05:25:52.916931] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:49.197 [2024-10-07 05:25:52.917090] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd83437500 00:06:49.197 [2024-10-07 05:25:52.917324] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:49.197 [2024-10-07 05:25:52.919086] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:49.197 [2024-10-07 05:25:52.919289] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd83437500 00:06:49.197 [2024-10-07 05:25:52.919426] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:49.197 [2024-10-07 05:25:52.919621] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd83437500 00:06:49.197 [2024-10-07 05:25:52.919806] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:49.197 [2024-10-07 05:25:52.919977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd83437500 00:06:49.197 [2024-10-07 05:25:52.920142] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:49.197 [2024-10-07 05:25:52.920301] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd83437500 00:06:49.197 passed 00:06:49.197 Test: for_each_channel_and_thread_exit_race ...passed 00:06:49.197 Test: for_each_thread_and_thread_exit_race ...passed 00:06:49.197 00:06:49.197 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.197 suites 1 1 n/a 0 0 00:06:49.197 tests 20 20 20 0 0 00:06:49.197 asserts 409 409 409 0 n/a 00:06:49.197 00:06:49.197 Elapsed time = 0.049 seconds 00:06:49.197 00:06:49.197 real 0m0.089s 00:06:49.197 user 0m0.064s 00:06:49.197 sys 0m0.024s 00:06:49.197 05:25:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.197 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.197 ************************************ 00:06:49.197 END TEST unittest_thread 00:06:49.197 
************************************ 00:06:49.197 05:25:52 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:49.197 05:25:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.197 05:25:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.197 05:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:49.197 ************************************ 00:06:49.197 START TEST unittest_iobuf 00:06:49.197 ************************************ 00:06:49.197 05:25:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:49.197 00:06:49.197 00:06:49.197 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.197 http://cunit.sourceforge.net/ 00:06:49.197 00:06:49.197 00:06:49.197 Suite: io_channel 00:06:49.197 Test: iobuf ...passed 00:06:49.197 Test: iobuf_cache ...[2024-10-07 05:25:53.032108] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:49.197 [2024-10-07 05:25:53.032785] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:49.197 [2024-10-07 05:25:53.032996] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:49.197 [2024-10-07 05:25:53.033180] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:49.197 [2024-10-07 05:25:53.033318] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:49.197 [2024-10-07 05:25:53.033412] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:06:49.197 passed 00:06:49.197 00:06:49.197 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.197 suites 1 1 n/a 0 0 00:06:49.197 tests 2 2 2 0 0 00:06:49.197 asserts 107 107 107 0 n/a 00:06:49.197 00:06:49.197 Elapsed time = 0.006 seconds 00:06:49.197 00:06:49.197 real 0m0.044s 00:06:49.197 user 0m0.033s 00:06:49.197 sys 0m0.010s 00:06:49.197 05:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.197 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.197 ************************************ 00:06:49.197 END TEST unittest_iobuf 00:06:49.197 ************************************ 00:06:49.197 05:25:53 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:06:49.197 05:25:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.197 05:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.197 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.197 ************************************ 00:06:49.197 START TEST unittest_util 00:06:49.197 ************************************ 00:06:49.197 05:25:53 -- common/autotest_common.sh@1104 -- # unittest_util 00:06:49.197 05:25:53 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:49.197 00:06:49.197 00:06:49.197 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.197 http://cunit.sourceforge.net/ 00:06:49.197 00:06:49.197 00:06:49.197 Suite: base64 00:06:49.197 Test: test_base64_get_encoded_strlen ...passed 00:06:49.197 Test: test_base64_get_decoded_len ...passed 00:06:49.197 Test: test_base64_encode ...passed 00:06:49.197 Test: test_base64_decode ...passed 00:06:49.197 Test: test_base64_urlsafe_encode ...passed 00:06:49.197 Test: test_base64_urlsafe_decode ...passed 00:06:49.197 00:06:49.197 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.197 suites 1 1 n/a 0 0 00:06:49.197 tests 6 6 6 0 0 00:06:49.198 asserts 112 112 112 0 n/a 00:06:49.198 00:06:49.198 Elapsed time = 0.000 seconds 00:06:49.198 05:25:53 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:49.198 00:06:49.198 00:06:49.198 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.198 http://cunit.sourceforge.net/ 00:06:49.198 00:06:49.198 00:06:49.198 Suite: bit_array 00:06:49.198 Test: test_1bit ...passed 00:06:49.198 Test: test_64bit ...passed 00:06:49.198 Test: test_find ...passed 00:06:49.198 Test: test_resize ...passed 00:06:49.198 Test: test_errors ...passed 00:06:49.198 Test: test_count ...passed 00:06:49.198 Test: test_mask_store_load ...passed 00:06:49.198 Test: test_mask_clear ...passed 00:06:49.198 00:06:49.198 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.198 suites 1 1 n/a 0 0 00:06:49.198 tests 8 8 8 0 0 00:06:49.198 asserts 5075 5075 5075 0 n/a 00:06:49.198 00:06:49.198 Elapsed time = 0.002 seconds 00:06:49.457 05:25:53 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:49.457 00:06:49.457 00:06:49.457 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.457 http://cunit.sourceforge.net/ 00:06:49.457 00:06:49.457 00:06:49.457 Suite: cpuset 00:06:49.457 Test: test_cpuset ...passed 00:06:49.457 Test: test_cpuset_parse ...[2024-10-07 05:25:53.185659] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:49.457 [2024-10-07 05:25:53.186015] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:06:49.457 [2024-10-07 05:25:53.186150] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:49.457 [2024-10-07 05:25:53.186239] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:49.457 [2024-10-07 05:25:53.186280] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:49.457 [2024-10-07 05:25:53.186326] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:49.457 passed 00:06:49.457 Test: test_cpuset_fmt ...[2024-10-07 05:25:53.186372] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:49.457 [2024-10-07 05:25:53.186431] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:49.457 passed 00:06:49.457 00:06:49.457 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.457 suites 1 1 n/a 0 0 00:06:49.457 tests 3 3 3 0 0 00:06:49.457 asserts 65 65 65 0 n/a 00:06:49.457 00:06:49.457 Elapsed time = 0.002 seconds 00:06:49.457 05:25:53 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:49.457 00:06:49.457 00:06:49.457 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.457 http://cunit.sourceforge.net/ 00:06:49.457 00:06:49.457 00:06:49.457 Suite: crc16 00:06:49.457 Test: test_crc16_t10dif ...passed 00:06:49.457 Test: test_crc16_t10dif_seed ...passed 00:06:49.457 Test: test_crc16_t10dif_copy ...passed 00:06:49.457 00:06:49.457 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.457 suites 1 1 n/a 0 0 00:06:49.457 tests 3 3 3 0 0 00:06:49.457 asserts 5 5 5 0 n/a 00:06:49.457 00:06:49.457 Elapsed time = 0.000 seconds 00:06:49.457 05:25:53 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:49.457 00:06:49.457 00:06:49.457 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.457 http://cunit.sourceforge.net/ 00:06:49.457 00:06:49.457 00:06:49.457 Suite: crc32_ieee 00:06:49.457 Test: test_crc32_ieee ...passed 00:06:49.457 00:06:49.458 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.458 suites 1 1 n/a 0 0 00:06:49.458 tests 1 1 1 0 0 00:06:49.458 asserts 1 1 1 0 n/a 00:06:49.458 00:06:49.458 Elapsed time = 0.000 seconds 00:06:49.458 05:25:53 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:49.458 00:06:49.458 00:06:49.458 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.458 http://cunit.sourceforge.net/ 00:06:49.458 00:06:49.458 00:06:49.458 Suite: crc32c 00:06:49.458 Test: test_crc32c ...passed 00:06:49.458 Test: test_crc32c_nvme ...passed 00:06:49.458 00:06:49.458 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.458 suites 1 1 n/a 0 0 00:06:49.458 tests 2 2 2 0 0 00:06:49.458 asserts 16 16 16 0 n/a 00:06:49.458 00:06:49.458 Elapsed time = 0.000 seconds 00:06:49.458 05:25:53 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:49.458 00:06:49.458 00:06:49.458 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.458 http://cunit.sourceforge.net/ 00:06:49.458 00:06:49.458 00:06:49.458 Suite: crc64 00:06:49.458 Test: test_crc64_nvme 
...passed 00:06:49.458 00:06:49.458 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.458 suites 1 1 n/a 0 0 00:06:49.458 tests 1 1 1 0 0 00:06:49.458 asserts 4 4 4 0 n/a 00:06:49.458 00:06:49.458 Elapsed time = 0.001 seconds 00:06:49.458 05:25:53 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:49.458 00:06:49.458 00:06:49.458 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.458 http://cunit.sourceforge.net/ 00:06:49.458 00:06:49.458 00:06:49.458 Suite: string 00:06:49.458 Test: test_parse_ip_addr ...passed 00:06:49.458 Test: test_str_chomp ...passed 00:06:49.458 Test: test_parse_capacity ...passed 00:06:49.458 Test: test_sprintf_append_realloc ...passed 00:06:49.458 Test: test_strtol ...passed 00:06:49.458 Test: test_strtoll ...passed 00:06:49.458 Test: test_strarray ...passed 00:06:49.458 Test: test_strcpy_replace ...passed 00:06:49.458 00:06:49.458 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.458 suites 1 1 n/a 0 0 00:06:49.458 tests 8 8 8 0 0 00:06:49.458 asserts 161 161 161 0 n/a 00:06:49.458 00:06:49.458 Elapsed time = 0.001 seconds 00:06:49.458 05:25:53 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:49.458 00:06:49.458 00:06:49.458 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.458 http://cunit.sourceforge.net/ 00:06:49.458 00:06:49.458 00:06:49.458 Suite: dif 00:06:49.458 Test: dif_generate_and_verify_test ...[2024-10-07 05:25:53.343113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:49.458 [2024-10-07 05:25:53.344313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:49.458 [2024-10-07 05:25:53.344758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:49.458 [2024-10-07 05:25:53.345792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:49.458 [2024-10-07 05:25:53.346589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:49.458 [2024-10-07 05:25:53.347297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:49.458 passed 00:06:49.458 Test: dif_disable_check_test ...[2024-10-07 05:25:53.349604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:49.458 [2024-10-07 05:25:53.350455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:49.458 [2024-10-07 05:25:53.351064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:49.458 passed 00:06:49.458 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-10-07 05:25:53.352758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:49.458 [2024-10-07 05:25:53.353412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:49.458 [2024-10-07 
05:25:53.354057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:49.458 [2024-10-07 05:25:53.354897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:49.458 [2024-10-07 05:25:53.355599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:49.458 [2024-10-07 05:25:53.356231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:49.458 [2024-10-07 05:25:53.356837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:49.458 [2024-10-07 05:25:53.357418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:49.458 [2024-10-07 05:25:53.358041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:49.458 [2024-10-07 05:25:53.358720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:49.458 [2024-10-07 05:25:53.359369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:49.458 passed 00:06:49.458 Test: dif_apptag_mask_test ...[2024-10-07 05:25:53.360040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:49.458 [2024-10-07 05:25:53.360495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:49.458 passed 00:06:49.458 Test: dif_sec_512_md_0_error_test ...[2024-10-07 05:25:53.360898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:49.458 passed 00:06:49.458 Test: dif_sec_4096_md_0_error_test ...[2024-10-07 05:25:53.361085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:49.458 [2024-10-07 05:25:53.361276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:49.458 passed 00:06:49.458 Test: dif_sec_4100_md_128_error_test ...[2024-10-07 05:25:53.361484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:49.458 [2024-10-07 05:25:53.361656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:49.458 passed 00:06:49.458 Test: dif_guard_seed_test ...passed 00:06:49.458 Test: dif_guard_value_test ...passed 00:06:49.458 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:49.458 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:49.458 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-07 05:25:53.407309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.458 [2024-10-07 05:25:53.410136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.458 [2024-10-07 05:25:53.412765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.458 [2024-10-07 05:25:53.415390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.458 [2024-10-07 05:25:53.418070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.458 [2024-10-07 05:25:53.420670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.458 [2024-10-07 05:25:53.423384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.458 [2024-10-07 05:25:53.424704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.458 [2024-10-07 05:25:53.425958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.458 [2024-10-07 05:25:53.428565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.458 [2024-10-07 05:25:53.431202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.719 [2024-10-07 05:25:53.434102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.719 [2024-10-07 05:25:53.436859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.719 [2024-10-07 05:25:53.439540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.442129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.720 [2024-10-07 05:25:53.443431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.720 [2024-10-07 05:25:53.444750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.720 [2024-10-07 05:25:53.447360] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.720 [2024-10-07 05:25:53.449949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.452560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.455222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.457834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.460442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.720 [2024-10-07 05:25:53.461704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.720 passed 00:06:49.720 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-10-07 05:25:53.462128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.720 [2024-10-07 05:25:53.462580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.720 [2024-10-07 05:25:53.462997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.463422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.463911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.464339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.464881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.720 [2024-10-07 05:25:53.465220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.720 [2024-10-07 05:25:53.465536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.720 [2024-10-07 05:25:53.465967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.720 [2024-10-07 05:25:53.466412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.466881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.467312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.467788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.468225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.720 [2024-10-07 05:25:53.468538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.720 [2024-10-07 05:25:53.468854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.720 [2024-10-07 05:25:53.469266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.720 [2024-10-07 05:25:53.469718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.470183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.470749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.471314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.471812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.720 [2024-10-07 05:25:53.472179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.720 passed 00:06:49.720 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-10-07 05:25:53.472546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.720 [2024-10-07 05:25:53.472968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.720 [2024-10-07 05:25:53.473395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.473802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.474226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.474679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.475131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.720 [2024-10-07 05:25:53.475466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.720 [2024-10-07 05:25:53.475808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.720 [2024-10-07 05:25:53.476257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.720 [2024-10-07 05:25:53.476695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.477135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.477587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.478026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.478460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.720 [2024-10-07 05:25:53.478801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.720 [2024-10-07 05:25:53.479139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.720 [2024-10-07 05:25:53.479597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.720 [2024-10-07 05:25:53.480036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 
[2024-10-07 05:25:53.480463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.480877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.481277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.481769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.720 [2024-10-07 05:25:53.482090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.720 passed 00:06:49.720 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-10-07 05:25:53.482482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.720 [2024-10-07 05:25:53.482940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.720 [2024-10-07 05:25:53.483393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.483855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.484310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.484734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.720 [2024-10-07 05:25:53.485171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.720 [2024-10-07 05:25:53.485585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.720 [2024-10-07 05:25:53.485918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.720 [2024-10-07 05:25:53.486368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.720 [2024-10-07 05:25:53.486928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.487452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.720 [2024-10-07 05:25:53.487906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.488377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.720 [2024-10-07 05:25:53.488826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.720 [2024-10-07 05:25:53.489153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.720 [2024-10-07 05:25:53.489478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.720 [2024-10-07 05:25:53.489909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.721 [2024-10-07 05:25:53.490337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.490827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.491267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.491722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.492182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.721 [2024-10-07 05:25:53.492496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.721 passed 00:06:49.721 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-10-07 05:25:53.492881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.721 [2024-10-07 05:25:53.493296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.721 [2024-10-07 05:25:53.493723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.494171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.494632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.495100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.495528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.721 [2024-10-07 05:25:53.495857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.721 passed 00:06:49.721 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-10-07 05:25:53.496259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.721 [2024-10-07 05:25:53.496680] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.721 [2024-10-07 05:25:53.497145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.497589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.498019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.498433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.498929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.721 [2024-10-07 05:25:53.499257] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.721 [2024-10-07 05:25:53.499635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.721 [2024-10-07 05:25:53.500064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.721 [2024-10-07 05:25:53.500495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.500918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.501340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.501765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.502210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.721 [2024-10-07 05:25:53.502528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.721 passed 00:06:49.721 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-10-07 05:25:53.502921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.721 [2024-10-07 05:25:53.503359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff21, Actual=fe21 00:06:49.721 [2024-10-07 05:25:53.503825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.504277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.504767] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.505238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.505791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.721 [2024-10-07 05:25:53.506143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=3690 00:06:49.721 passed 00:06:49.721 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-10-07 05:25:53.506587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.721 [2024-10-07 05:25:53.507012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574760, Actual=38574660 00:06:49.721 [2024-10-07 05:25:53.507474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.507944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.508392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.508814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.509236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.721 [2024-10-07 05:25:53.509552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ead246b 00:06:49.721 [2024-10-07 05:25:53.509953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.721 [2024-10-07 05:25:53.510372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010b2d4837a266, Actual=88010a2d4837a266 00:06:49.721 [2024-10-07 05:25:53.510874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.511426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.511910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.512344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.512816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.721 [2024-10-07 05:25:53.513178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9bd7adb4d8b5548c 00:06:49.721 passed 00:06:49.721 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:49.721 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:49.721 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:49.721 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:49.721 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-07 05:25:53.547610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.721 [2024-10-07 05:25:53.548794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:49.721 [2024-10-07 05:25:53.549664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.550548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.551421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.552370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.721 [2024-10-07 05:25:53.553227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.721 [2024-10-07 05:25:53.554089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e5a7 00:06:49.721 [2024-10-07 05:25:53.554998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.721 [2024-10-07 05:25:53.555917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bde3c11d, Actual=bde3c01d 00:06:49.721 [2024-10-07 05:25:53.556860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.557904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.721 [2024-10-07 05:25:53.558837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.559787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.721 [2024-10-07 05:25:53.560709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.721 
[2024-10-07 05:25:53.561594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ac5c22a7 00:06:49.722 [2024-10-07 05:25:53.562478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.722 [2024-10-07 05:25:53.563401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed636711c4f1168a, Actual=ed636611c4f1168a 00:06:49.722 [2024-10-07 05:25:53.564382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.565288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.566164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.567071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.567981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.722 [2024-10-07 05:25:53.568921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2bb08d82c658c648 00:06:49.722 passed 00:06:49.722 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-10-07 05:25:53.569282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.722 [2024-10-07 05:25:53.569614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:49.722 [2024-10-07 05:25:53.569926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.570231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.570551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.570872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.571156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.722 [2024-10-07 05:25:53.571468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e5a7 00:06:49.722 [2024-10-07 05:25:53.571826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.722 [2024-10-07 05:25:53.572138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bde3c11d, Actual=bde3c01d 00:06:49.722 [2024-10-07 05:25:53.572436] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.572729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.573013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.573300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.573591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.722 [2024-10-07 05:25:53.573871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ac5c22a7 00:06:49.722 [2024-10-07 05:25:53.574179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.722 [2024-10-07 05:25:53.574460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed636711c4f1168a, Actual=ed636611c4f1168a 00:06:49.722 [2024-10-07 05:25:53.574776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.575062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.575364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.575667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.575964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.722 [2024-10-07 05:25:53.576271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2bb08d82c658c648 00:06:49.722 passed 00:06:49.722 Test: dix_sec_512_md_0_error ...[2024-10-07 05:25:53.576529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:06:49.722 passed 00:06:49.722 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:06:49.722 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:49.722 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:49.722 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:49.722 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:49.722 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:49.722 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:49.722 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:49.722 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:49.722 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-10-07 05:25:53.607885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.722 [2024-10-07 05:25:53.608833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:49.722 [2024-10-07 05:25:53.609774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.610654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.611495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.612397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.613203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.722 [2024-10-07 05:25:53.614071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e5a7 00:06:49.722 [2024-10-07 05:25:53.614914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.722 [2024-10-07 05:25:53.615771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bde3c11d, Actual=bde3c01d 00:06:49.722 [2024-10-07 05:25:53.616649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.617496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.618353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.619225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.620141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.722 [2024-10-07 05:25:53.620979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=eaa640ac, Actual=ac5c22a7 00:06:49.722 [2024-10-07 05:25:53.621879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.722 [2024-10-07 05:25:53.622735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed636711c4f1168a, Actual=ed636611c4f1168a 00:06:49.722 [2024-10-07 05:25:53.623615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.624519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.625501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.626527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.627516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.722 [2024-10-07 05:25:53.628482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2bb08d82c658c648 00:06:49.722 passed 00:06:49.722 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-10-07 05:25:53.628946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc4c, Actual=fd4c 00:06:49.722 [2024-10-07 05:25:53.629324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=77ff, Actual=76ff 00:06:49.722 [2024-10-07 05:25:53.629716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.630114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.630497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.630880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.722 [2024-10-07 05:25:53.631179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e057 00:06:49.722 [2024-10-07 05:25:53.631519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e5a7 00:06:49.722 [2024-10-07 05:25:53.631886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab752ed, Actual=1ab753ed 00:06:49.722 [2024-10-07 05:25:53.632217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bde3c11d, Actual=bde3c01d 00:06:49.722 [2024-10-07 05:25:53.632566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.632913] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.722 [2024-10-07 05:25:53.633189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.633476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000058 00:06:49.722 [2024-10-07 05:25:53.633759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8e14e95 00:06:49.723 [2024-10-07 05:25:53.634126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ac5c22a7 00:06:49.723 [2024-10-07 05:25:53.634518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a6728ecc20d3, Actual=a576a7728ecc20d3 00:06:49.723 [2024-10-07 05:25:53.634920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed636711c4f1168a, Actual=ed636611c4f1168a 00:06:49.723 [2024-10-07 05:25:53.635297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.723 [2024-10-07 05:25:53.635704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=188 00:06:49.723 [2024-10-07 05:25:53.636032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.723 [2024-10-07 05:25:53.636390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=158 00:06:49.723 [2024-10-07 05:25:53.636719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=eb45db40fc46b863 00:06:49.723 [2024-10-07 05:25:53.637050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2bb08d82c658c648 00:06:49.723 passed 00:06:49.723 Test: set_md_interleave_iovs_test ...passed 00:06:49.723 Test: set_md_interleave_iovs_split_test ...passed 00:06:49.723 Test: dif_generate_stream_pi_16_test ...passed 00:06:49.723 Test: dif_generate_stream_test ...passed 00:06:49.723 Test: set_md_interleave_iovs_alignment_test ...[2024-10-07 05:25:53.642457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
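The *ERROR* lines above are printed by the verify path in lib/util/dif.c while the dif/dix unit tests deliberately inject corrupted Guard, App Tag and Ref Tag values (and undersized metadata), so they appear even on a fully passing run. A minimal bash sketch for checking the real outcome from a saved copy of this output — the file name ut_dif.log is hypothetical, not produced by this run:

    # The "Failed to compare ..." messages are expected error-injection output; the
    # authoritative result is the run summary ("tests 79 79 79 0 0" means 79/79 passed).
    grep -c 'Failed to compare' ut_dif.log
    grep -A3 'Run Summary' ut_dif.log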
00:06:49.723 passed 00:06:49.723 Test: dif_generate_split_test ...passed 00:06:49.723 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:49.723 Test: dif_verify_split_test ...passed 00:06:49.723 Test: dif_verify_stream_multi_segments_test ...passed 00:06:49.723 Test: update_crc32c_pi_16_test ...passed 00:06:49.723 Test: update_crc32c_test ...passed 00:06:49.723 Test: dif_update_crc32c_split_test ...passed 00:06:49.723 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:49.723 Test: get_range_with_md_test ...passed 00:06:49.723 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:49.723 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:49.723 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:49.723 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:49.723 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:49.723 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:49.723 Test: dif_generate_and_verify_unmap_test ...passed 00:06:49.723 00:06:49.723 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.723 suites 1 1 n/a 0 0 00:06:49.723 tests 79 79 79 0 0 00:06:49.723 asserts 3584 3584 3584 0 n/a 00:06:49.723 00:06:49.723 Elapsed time = 0.305 seconds 00:06:49.983 05:25:53 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:49.983 00:06:49.983 00:06:49.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.983 http://cunit.sourceforge.net/ 00:06:49.983 00:06:49.983 00:06:49.983 Suite: iov 00:06:49.983 Test: test_single_iov ...passed 00:06:49.983 Test: test_simple_iov ...passed 00:06:49.983 Test: test_complex_iov ...passed 00:06:49.983 Test: test_iovs_to_buf ...passed 00:06:49.983 Test: test_buf_to_iovs ...passed 00:06:49.983 Test: test_memset ...passed 00:06:49.983 Test: test_iov_one ...passed 00:06:49.983 Test: test_iov_xfer ...passed 00:06:49.983 00:06:49.983 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.983 suites 1 1 n/a 0 0 00:06:49.983 tests 8 8 8 0 0 00:06:49.983 asserts 156 156 156 0 n/a 00:06:49.983 00:06:49.983 Elapsed time = 0.000 seconds 00:06:49.983 05:25:53 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:49.983 00:06:49.983 00:06:49.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.983 http://cunit.sourceforge.net/ 00:06:49.983 00:06:49.983 00:06:49.983 Suite: math 00:06:49.983 Test: test_serial_number_arithmetic ...passed 00:06:49.983 Suite: erase 00:06:49.983 Test: test_memset_s ...passed 00:06:49.983 00:06:49.983 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.983 suites 2 2 n/a 0 0 00:06:49.983 tests 2 2 2 0 0 00:06:49.983 asserts 18 18 18 0 n/a 00:06:49.983 00:06:49.983 Elapsed time = 0.000 seconds 00:06:49.983 05:25:53 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:49.983 00:06:49.983 00:06:49.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.983 http://cunit.sourceforge.net/ 00:06:49.983 00:06:49.983 00:06:49.983 Suite: pipe 00:06:49.983 Test: test_create_destroy ...passed 00:06:49.983 Test: test_write_get_buffer ...passed 00:06:49.983 Test: test_write_advance ...passed 00:06:49.983 Test: test_read_get_buffer ...passed 00:06:49.983 Test: test_read_advance ...passed 00:06:49.983 Test: test_data ...passed 00:06:49.983 00:06:49.983 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:49.983 suites 1 1 n/a 0 0 00:06:49.983 tests 6 6 6 0 0 00:06:49.983 asserts 250 250 250 0 n/a 00:06:49.983 00:06:49.983 Elapsed time = 0.000 seconds 00:06:49.983 05:25:53 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:49.983 00:06:49.983 00:06:49.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.983 http://cunit.sourceforge.net/ 00:06:49.983 00:06:49.983 00:06:49.983 Suite: xor 00:06:49.983 Test: test_xor_gen ...passed 00:06:49.983 00:06:49.983 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.983 suites 1 1 n/a 0 0 00:06:49.983 tests 1 1 1 0 0 00:06:49.983 asserts 17 17 17 0 n/a 00:06:49.983 00:06:49.983 Elapsed time = 0.005 seconds 00:06:49.983 00:06:49.983 real 0m0.682s 00:06:49.983 user 0m0.441s 00:06:49.983 sys 0m0.219s 00:06:49.983 05:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.983 ************************************ 00:06:49.983 END TEST unittest_util 00:06:49.983 ************************************ 00:06:49.983 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.983 05:25:53 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:49.983 05:25:53 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:49.983 05:25:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.983 05:25:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.983 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:49.983 ************************************ 00:06:49.983 START TEST unittest_vhost 00:06:49.983 ************************************ 00:06:49.983 05:25:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:49.983 00:06:49.983 00:06:49.983 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.983 http://cunit.sourceforge.net/ 00:06:49.983 00:06:49.983 00:06:49.983 Suite: vhost_suite 00:06:49.983 Test: desc_to_iov_test ...[2024-10-07 05:25:53.882053] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:49.983 passed 00:06:49.983 Test: create_controller_test ...[2024-10-07 05:25:53.890823] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:49.983 [2024-10-07 05:25:53.891957] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:49.983 [2024-10-07 05:25:53.892892] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:49.983 [2024-10-07 05:25:53.893514] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:49.983 [2024-10-07 05:25:53.893650] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:49.983 [2024-10-07 05:25:53.895527] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-10-07 05:25:53.901946] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:49.983 passed 00:06:49.983 Test: session_find_by_vid_test ...passed 00:06:49.983 Test: remove_controller_test ...[2024-10-07 05:25:53.913606] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:49.983 passed 00:06:49.983 Test: vq_avail_ring_get_test ...passed 00:06:49.983 Test: vq_packed_ring_test ...passed 00:06:49.983 Test: vhost_blk_construct_test ...passed 00:06:49.983 00:06:49.983 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.983 suites 1 1 n/a 0 0 00:06:49.983 tests 7 7 7 0 0 00:06:49.983 asserts 145 145 145 0 n/a 00:06:49.983 00:06:49.983 Elapsed time = 0.041 seconds 00:06:49.983 00:06:49.983 real 0m0.090s 00:06:49.983 user 0m0.046s 00:06:49.983 sys 0m0.044s 00:06:49.983 05:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.983 ************************************ 00:06:49.983 END TEST unittest_vhost 00:06:49.983 ************************************ 00:06:49.983 05:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:50.243 05:25:54 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:50.243 05:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.243 05:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.243 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.243 ************************************ 00:06:50.243 START TEST unittest_dma 00:06:50.243 ************************************ 00:06:50.243 05:25:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:50.243 00:06:50.243 00:06:50.243 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.243 http://cunit.sourceforge.net/ 00:06:50.243 00:06:50.243 00:06:50.243 Suite: dma_suite 00:06:50.243 Test: test_dma ...[2024-10-07 05:25:54.030958] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:50.243 passed 00:06:50.243 00:06:50.243 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.243 suites 1 1 n/a 0 0 00:06:50.243 tests 1 1 1 0 0 00:06:50.243 asserts 50 50 50 0 n/a 00:06:50.243 00:06:50.243 Elapsed time = 0.001 seconds 00:06:50.243 00:06:50.243 real 0m0.031s 00:06:50.243 user 0m0.019s 00:06:50.243 sys 0m0.012s 00:06:50.243 05:25:54 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.243 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.243 ************************************ 00:06:50.243 END TEST unittest_dma 00:06:50.243 ************************************ 00:06:50.243 05:25:54 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:50.243 05:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.243 05:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.243 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.243 ************************************ 00:06:50.243 START TEST unittest_init 00:06:50.243 ************************************ 00:06:50.243 05:25:54 -- common/autotest_common.sh@1104 -- # unittest_init 00:06:50.244 05:25:54 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:50.244 00:06:50.244 00:06:50.244 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.244 http://cunit.sourceforge.net/ 00:06:50.244 00:06:50.244 00:06:50.244 Suite: subsystem_suite 00:06:50.244 Test: subsystem_sort_test_depends_on_single ...passed 00:06:50.244 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:50.244 Test: subsystem_sort_test_missing_dependency ...[2024-10-07 05:25:54.133929] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:50.244 passed 00:06:50.244 00:06:50.244 [2024-10-07 05:25:54.134276] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:50.244 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.244 suites 1 1 n/a 0 0 00:06:50.244 tests 3 3 3 0 0 00:06:50.244 asserts 20 20 20 0 n/a 00:06:50.244 00:06:50.244 Elapsed time = 0.001 seconds 00:06:50.244 00:06:50.244 real 0m0.034s 00:06:50.244 user 0m0.023s 00:06:50.244 sys 0m0.012s 00:06:50.244 05:25:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.244 05:25:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.244 ************************************ 00:06:50.244 END TEST unittest_init 00:06:50.244 ************************************ 00:06:50.244 05:25:54 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:50.244 05:25:54 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:50.244 05:25:54 -- unit/unittest.sh@290 -- # hostname 00:06:50.244 05:25:54 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:50.503 geninfo: WARNING: invalid characters removed from testname! 
00:07:17.046 05:26:20 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:21.239 05:26:24 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:23.773 05:26:27 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:27.056 05:26:30 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:29.678 05:26:33 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:32.208 05:26:36 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:34.741 05:26:38 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:36.646 05:26:40 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:36.646 05:26:40 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:37.213 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:37.213 Found 309 entries. 
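The lcov and genhtml invocations above implement the coverage post-processing: merge the pre-test baseline with the post-test capture, strip source directories that should not count toward unit-test coverage, then render an HTML report. A condensed sketch of the same flow, with COV used as shorthand for /home/vagrant/spdk_repo/spdk/../output/ut_coverage and the long --rc lcov/genhtml options omitted for brevity:

    COV=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    # merge the baseline capture with the post-test capture into one tracefile
    lcov -q -a $COV/ut_cov_base.info -a $COV/ut_cov_test.info -o $COV/ut_cov_total.info
    lcov -q -a $COV/ut_cov_total.info -o $COV/ut_cov_unit.info
    # drop app/, dpdk/, examples/, rte_vhost/ and test/ sources from the unit tracefile
    for d in app dpdk examples lib/vhost/rte_vhost test; do
        lcov -q -r $COV/ut_cov_unit.info "/home/vagrant/spdk_repo/spdk/$d/*" -o $COV/ut_cov_unit.info
    done
    rm -f $COV/ut_cov_base.info $COV/ut_cov_test.info
    genhtml $COV/ut_cov_unit.info --output-directory $COV   # writes the HTML report shown below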
00:07:37.213 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:37.213 Writing .css and .png files. 00:07:37.213 Generating output. 00:07:37.472 Processing file include/linux/virtio_ring.h 00:07:37.731 Processing file include/spdk/mmio.h 00:07:37.731 Processing file include/spdk/util.h 00:07:37.731 Processing file include/spdk/nvme_spec.h 00:07:37.731 Processing file include/spdk/bdev_module.h 00:07:37.731 Processing file include/spdk/endian.h 00:07:37.731 Processing file include/spdk/nvme.h 00:07:37.731 Processing file include/spdk/thread.h 00:07:37.731 Processing file include/spdk/histogram_data.h 00:07:37.731 Processing file include/spdk/trace.h 00:07:37.731 Processing file include/spdk/nvmf_transport.h 00:07:37.731 Processing file include/spdk/base64.h 00:07:37.731 Processing file include/spdk_internal/nvme_tcp.h 00:07:37.731 Processing file include/spdk_internal/virtio.h 00:07:37.731 Processing file include/spdk_internal/sgl.h 00:07:37.731 Processing file include/spdk_internal/sock.h 00:07:37.731 Processing file include/spdk_internal/rdma.h 00:07:37.731 Processing file include/spdk_internal/utf.h 00:07:37.991 Processing file lib/accel/accel_sw.c 00:07:37.991 Processing file lib/accel/accel.c 00:07:37.991 Processing file lib/accel/accel_rpc.c 00:07:38.250 Processing file lib/bdev/bdev.c 00:07:38.250 Processing file lib/bdev/part.c 00:07:38.250 Processing file lib/bdev/bdev_zone.c 00:07:38.250 Processing file lib/bdev/bdev_rpc.c 00:07:38.250 Processing file lib/bdev/scsi_nvme.c 00:07:38.509 Processing file lib/blob/blob_bs_dev.c 00:07:38.509 Processing file lib/blob/zeroes.c 00:07:38.509 Processing file lib/blob/blobstore.h 00:07:38.509 Processing file lib/blob/request.c 00:07:38.509 Processing file lib/blob/blobstore.c 00:07:38.509 Processing file lib/blobfs/tree.c 00:07:38.509 Processing file lib/blobfs/blobfs.c 00:07:38.509 Processing file lib/conf/conf.c 00:07:38.767 Processing file lib/dma/dma.c 00:07:38.768 Processing file lib/env_dpdk/threads.c 00:07:38.768 Processing file lib/env_dpdk/init.c 00:07:38.768 Processing file lib/env_dpdk/env.c 00:07:38.768 Processing file lib/env_dpdk/pci_ioat.c 00:07:38.768 Processing file lib/env_dpdk/pci_idxd.c 00:07:38.768 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:38.768 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:38.768 Processing file lib/env_dpdk/pci_event.c 00:07:38.768 Processing file lib/env_dpdk/pci_vmd.c 00:07:38.768 Processing file lib/env_dpdk/pci_virtio.c 00:07:38.768 Processing file lib/env_dpdk/pci.c 00:07:38.768 Processing file lib/env_dpdk/memory.c 00:07:38.768 Processing file lib/env_dpdk/sigbus_handler.c 00:07:38.768 Processing file lib/env_dpdk/pci_dpdk.c 00:07:39.026 Processing file lib/event/scheduler_static.c 00:07:39.026 Processing file lib/event/app_rpc.c 00:07:39.026 Processing file lib/event/log_rpc.c 00:07:39.026 Processing file lib/event/reactor.c 00:07:39.026 Processing file lib/event/app.c 00:07:39.594 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:39.594 Processing file lib/ftl/ftl_band.c 00:07:39.594 Processing file lib/ftl/ftl_p2l.c 00:07:39.594 Processing file lib/ftl/ftl_writer.h 00:07:39.594 Processing file lib/ftl/ftl_writer.c 00:07:39.594 Processing file lib/ftl/ftl_band.h 00:07:39.594 Processing file lib/ftl/ftl_l2p.c 00:07:39.594 Processing file lib/ftl/ftl_init.c 00:07:39.594 Processing file lib/ftl/ftl_debug.h 00:07:39.594 Processing file lib/ftl/ftl_io.h 00:07:39.594 Processing file lib/ftl/ftl_io.c 00:07:39.594 Processing file lib/ftl/ftl_trace.c 00:07:39.594 Processing 
file lib/ftl/ftl_layout.c 00:07:39.594 Processing file lib/ftl/ftl_sb.c 00:07:39.594 Processing file lib/ftl/ftl_l2p_cache.c 00:07:39.594 Processing file lib/ftl/ftl_debug.c 00:07:39.594 Processing file lib/ftl/ftl_nv_cache.c 00:07:39.594 Processing file lib/ftl/ftl_reloc.c 00:07:39.594 Processing file lib/ftl/ftl_band_ops.c 00:07:39.594 Processing file lib/ftl/ftl_nv_cache.h 00:07:39.594 Processing file lib/ftl/ftl_core.h 00:07:39.594 Processing file lib/ftl/ftl_rq.c 00:07:39.594 Processing file lib/ftl/ftl_l2p_flat.c 00:07:39.594 Processing file lib/ftl/ftl_core.c 00:07:39.594 Processing file lib/ftl/base/ftl_base_dev.c 00:07:39.594 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:39.853 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:39.854 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:39.854 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:39.854 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:39.854 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:40.112 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:40.112 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:40.112 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:40.112 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:40.112 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:40.112 Processing file lib/ftl/utils/ftl_df.h 00:07:40.112 Processing file lib/ftl/utils/ftl_property.c 00:07:40.112 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:40.112 Processing file lib/ftl/utils/ftl_conf.c 00:07:40.112 Processing file lib/ftl/utils/ftl_md.c 00:07:40.112 Processing file lib/ftl/utils/ftl_mempool.c 00:07:40.112 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:40.112 Processing file lib/ftl/utils/ftl_property.h 00:07:40.371 Processing file lib/idxd/idxd_internal.h 00:07:40.371 Processing file lib/idxd/idxd_user.c 00:07:40.371 Processing file lib/idxd/idxd.c 00:07:40.371 Processing file lib/init/subsystem_rpc.c 00:07:40.371 Processing file lib/init/subsystem.c 00:07:40.371 Processing file lib/init/rpc.c 00:07:40.371 Processing file lib/init/json_config.c 00:07:40.371 Processing file lib/ioat/ioat.c 00:07:40.371 Processing file lib/ioat/ioat_internal.h 00:07:40.939 Processing file lib/iscsi/iscsi_rpc.c 00:07:40.939 Processing file lib/iscsi/md5.c 00:07:40.939 Processing file lib/iscsi/iscsi.h 00:07:40.939 Processing file lib/iscsi/task.h 00:07:40.939 Processing file lib/iscsi/param.c 00:07:40.939 Processing file lib/iscsi/task.c 00:07:40.939 Processing file lib/iscsi/conn.c 00:07:40.939 Processing file lib/iscsi/tgt_node.c 00:07:40.939 Processing file lib/iscsi/init_grp.c 00:07:40.939 Processing file lib/iscsi/portal_grp.c 00:07:40.939 Processing file lib/iscsi/iscsi_subsystem.c 00:07:40.939 Processing file lib/iscsi/iscsi.c 00:07:40.939 Processing file lib/json/json_parse.c 00:07:40.939 Processing file lib/json/json_write.c 00:07:40.939 Processing file lib/json/json_util.c 00:07:40.939 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:07:40.939 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:40.939 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:40.939 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:41.198 Processing file lib/log/log_deprecated.c 00:07:41.198 Processing file lib/log/log.c 00:07:41.198 Processing file lib/log/log_flags.c 00:07:41.198 Processing file lib/lvol/lvol.c 00:07:41.198 Processing file lib/nbd/nbd_rpc.c 00:07:41.198 Processing file lib/nbd/nbd.c 00:07:41.457 Processing file lib/notify/notify.c 00:07:41.457 Processing file lib/notify/notify_rpc.c 00:07:42.025 Processing file lib/nvme/nvme_cuse.c 00:07:42.025 Processing file lib/nvme/nvme_rdma.c 00:07:42.025 Processing file lib/nvme/nvme_qpair.c 00:07:42.025 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:42.025 Processing file lib/nvme/nvme_poll_group.c 00:07:42.025 Processing file lib/nvme/nvme_pcie.c 00:07:42.025 Processing file lib/nvme/nvme_zns.c 00:07:42.025 Processing file lib/nvme/nvme_opal.c 00:07:42.025 Processing file lib/nvme/nvme_io_msg.c 00:07:42.025 Processing file lib/nvme/nvme_pcie_internal.h 00:07:42.025 Processing file lib/nvme/nvme_ns.c 00:07:42.025 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:42.025 Processing file lib/nvme/nvme.c 00:07:42.025 Processing file lib/nvme/nvme_internal.h 00:07:42.025 Processing file lib/nvme/nvme_fabric.c 00:07:42.025 Processing file lib/nvme/nvme_quirks.c 00:07:42.025 Processing file lib/nvme/nvme_vfio_user.c 00:07:42.025 Processing file lib/nvme/nvme_ctrlr.c 00:07:42.025 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:42.025 Processing file lib/nvme/nvme_tcp.c 00:07:42.025 Processing file lib/nvme/nvme_discovery.c 00:07:42.025 Processing file lib/nvme/nvme_ns_cmd.c 00:07:42.025 Processing file lib/nvme/nvme_transport.c 00:07:42.025 Processing file lib/nvme/nvme_pcie_common.c 00:07:42.626 Processing file lib/nvmf/ctrlr_discovery.c 00:07:42.626 Processing file lib/nvmf/rdma.c 00:07:42.626 Processing file lib/nvmf/nvmf.c 00:07:42.626 Processing file lib/nvmf/subsystem.c 00:07:42.626 Processing file lib/nvmf/tcp.c 00:07:42.626 Processing file lib/nvmf/ctrlr.c 00:07:42.626 Processing file lib/nvmf/nvmf_rpc.c 00:07:42.626 Processing file lib/nvmf/transport.c 00:07:42.626 Processing file lib/nvmf/nvmf_internal.h 00:07:42.626 Processing file lib/nvmf/ctrlr_bdev.c 00:07:42.626 Processing file lib/rdma/common.c 00:07:42.626 Processing file lib/rdma/rdma_verbs.c 00:07:42.626 Processing file lib/rpc/rpc.c 00:07:42.885 Processing file lib/scsi/scsi.c 00:07:42.885 Processing file lib/scsi/lun.c 00:07:42.885 Processing file lib/scsi/dev.c 00:07:42.885 Processing file lib/scsi/task.c 00:07:42.885 Processing file lib/scsi/port.c 00:07:42.885 Processing file lib/scsi/scsi_bdev.c 00:07:42.885 Processing file lib/scsi/scsi_pr.c 00:07:42.885 Processing file lib/scsi/scsi_rpc.c 00:07:42.885 Processing file lib/sock/sock_rpc.c 00:07:42.885 Processing file lib/sock/sock.c 00:07:43.144 Processing file lib/thread/thread.c 00:07:43.144 Processing file lib/thread/iobuf.c 00:07:43.144 Processing file lib/trace/trace_rpc.c 00:07:43.144 Processing file lib/trace/trace.c 00:07:43.144 Processing file lib/trace/trace_flags.c 00:07:43.144 Processing file lib/trace_parser/trace.cpp 00:07:43.402 Processing file lib/ut/ut.c 00:07:43.402 Processing file lib/ut_mock/mock.c 00:07:43.660 Processing file lib/util/string.c 00:07:43.660 Processing file lib/util/file.c 00:07:43.660 Processing file lib/util/bit_array.c 00:07:43.660 Processing file lib/util/crc32_ieee.c 00:07:43.660 
Processing file lib/util/crc64.c 00:07:43.660 Processing file lib/util/crc32c.c 00:07:43.660 Processing file lib/util/zipf.c 00:07:43.660 Processing file lib/util/base64.c 00:07:43.660 Processing file lib/util/crc16.c 00:07:43.660 Processing file lib/util/strerror_tls.c 00:07:43.660 Processing file lib/util/uuid.c 00:07:43.660 Processing file lib/util/math.c 00:07:43.660 Processing file lib/util/iov.c 00:07:43.660 Processing file lib/util/xor.c 00:07:43.660 Processing file lib/util/fd.c 00:07:43.660 Processing file lib/util/fd_group.c 00:07:43.660 Processing file lib/util/crc32.c 00:07:43.660 Processing file lib/util/dif.c 00:07:43.660 Processing file lib/util/cpuset.c 00:07:43.660 Processing file lib/util/pipe.c 00:07:43.660 Processing file lib/util/hexlify.c 00:07:43.660 Processing file lib/vfio_user/host/vfio_user.c 00:07:43.660 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:43.918 Processing file lib/vhost/rte_vhost_user.c 00:07:43.918 Processing file lib/vhost/vhost.c 00:07:43.918 Processing file lib/vhost/vhost_scsi.c 00:07:43.918 Processing file lib/vhost/vhost_blk.c 00:07:43.918 Processing file lib/vhost/vhost_rpc.c 00:07:43.918 Processing file lib/vhost/vhost_internal.h 00:07:44.177 Processing file lib/virtio/virtio_vhost_user.c 00:07:44.177 Processing file lib/virtio/virtio_vfio_user.c 00:07:44.177 Processing file lib/virtio/virtio_pci.c 00:07:44.177 Processing file lib/virtio/virtio.c 00:07:44.177 Processing file lib/vmd/vmd.c 00:07:44.177 Processing file lib/vmd/led.c 00:07:44.434 Processing file module/accel/dsa/accel_dsa.c 00:07:44.434 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:44.434 Processing file module/accel/error/accel_error_rpc.c 00:07:44.434 Processing file module/accel/error/accel_error.c 00:07:44.434 Processing file module/accel/iaa/accel_iaa.c 00:07:44.434 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:44.692 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:44.692 Processing file module/accel/ioat/accel_ioat.c 00:07:44.692 Processing file module/bdev/aio/bdev_aio.c 00:07:44.692 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:44.692 Processing file module/bdev/delay/vbdev_delay.c 00:07:44.692 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:44.950 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:44.950 Processing file module/bdev/error/vbdev_error.c 00:07:44.950 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:44.950 Processing file module/bdev/ftl/bdev_ftl.c 00:07:45.207 Processing file module/bdev/gpt/gpt.h 00:07:45.207 Processing file module/bdev/gpt/gpt.c 00:07:45.207 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:45.207 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:45.207 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:45.207 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:45.207 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:45.465 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:45.465 Processing file module/bdev/malloc/bdev_malloc.c 00:07:45.465 Processing file module/bdev/null/bdev_null.c 00:07:45.465 Processing file module/bdev/null/bdev_null_rpc.c 00:07:45.723 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:45.723 Processing file module/bdev/nvme/bdev_nvme.c 00:07:45.723 Processing file module/bdev/nvme/nvme_rpc.c 00:07:45.723 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:45.723 Processing file module/bdev/nvme/vbdev_opal.c 00:07:45.723 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:45.723 Processing file 
module/bdev/nvme/bdev_nvme_rpc.c 00:07:45.980 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:45.980 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:46.238 Processing file module/bdev/raid/raid0.c 00:07:46.238 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:46.238 Processing file module/bdev/raid/bdev_raid.h 00:07:46.238 Processing file module/bdev/raid/bdev_raid.c 00:07:46.238 Processing file module/bdev/raid/raid1.c 00:07:46.238 Processing file module/bdev/raid/concat.c 00:07:46.238 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:46.238 Processing file module/bdev/raid/raid5f.c 00:07:46.238 Processing file module/bdev/split/vbdev_split.c 00:07:46.238 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:46.238 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:46.238 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:46.238 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:46.495 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:46.495 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:46.495 Processing file module/blob/bdev/blob_bdev.c 00:07:46.752 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:46.752 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:46.752 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:46.752 Processing file module/event/subsystems/accel/accel.c 00:07:46.752 Processing file module/event/subsystems/bdev/bdev.c 00:07:47.010 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:47.010 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:47.010 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:47.010 Processing file module/event/subsystems/nbd/nbd.c 00:07:47.269 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:47.269 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:47.269 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:47.269 Processing file module/event/subsystems/scsi/scsi.c 00:07:47.527 Processing file module/event/subsystems/sock/sock.c 00:07:47.527 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:47.527 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:47.527 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:47.527 Processing file module/event/subsystems/vmd/vmd.c 00:07:47.785 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:47.785 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:47.785 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:47.785 Processing file module/sock/sock_kernel.h 00:07:48.043 Processing file module/sock/posix/posix.c 00:07:48.044 Writing directory view page. 
00:07:48.044 Overall coverage rate: 00:07:48.044 lines......: 39.1% (39266 of 100422 lines) 00:07:48.044 functions..: 42.8% (3587 of 8384 functions) 00:07:48.044 00:07:48.044 00:07:48.044 ===================== 00:07:48.044 All unit tests passed 00:07:48.044 ===================== 00:07:48.044 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:48.044 05:26:51 -- unit/unittest.sh@302 -- # set +x 00:07:48.044 00:07:48.044 00:07:48.044 00:07:48.044 real 2m50.377s 00:07:48.044 user 2m25.638s 00:07:48.044 sys 0m14.417s 00:07:48.044 05:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.044 ************************************ 00:07:48.044 END TEST unittest 00:07:48.044 ************************************ 00:07:48.044 05:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 05:26:51 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:48.044 05:26:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:48.044 05:26:51 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:48.044 05:26:51 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:48.044 05:26:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.044 05:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 05:26:51 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:48.044 05:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.044 05:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.044 05:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 ************************************ 00:07:48.044 START TEST env 00:07:48.044 ************************************ 00:07:48.044 05:26:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:48.044 * Looking for test storage... 
00:07:48.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:48.044 05:26:51 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:48.044 05:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.044 05:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.044 05:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:48.044 ************************************ 00:07:48.044 START TEST env_memory 00:07:48.044 ************************************ 00:07:48.044 05:26:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:48.044 00:07:48.044 00:07:48.044 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.044 http://cunit.sourceforge.net/ 00:07:48.044 00:07:48.044 00:07:48.044 Suite: memory 00:07:48.044 Test: alloc and free memory map ...[2024-10-07 05:26:52.002392] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:48.302 passed 00:07:48.302 Test: mem map translation ...[2024-10-07 05:26:52.038196] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:48.302 [2024-10-07 05:26:52.038290] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:48.302 [2024-10-07 05:26:52.038391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:48.302 [2024-10-07 05:26:52.038454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:48.302 passed 00:07:48.302 Test: mem map registration ...[2024-10-07 05:26:52.099209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:48.302 [2024-10-07 05:26:52.099295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:48.302 passed 00:07:48.302 Test: mem map adjacent registrations ...passed 00:07:48.302 00:07:48.302 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.302 suites 1 1 n/a 0 0 00:07:48.302 tests 4 4 4 0 0 00:07:48.302 asserts 152 152 152 0 n/a 00:07:48.302 00:07:48.302 Elapsed time = 0.213 seconds 00:07:48.302 00:07:48.302 real 0m0.241s 00:07:48.302 user 0m0.211s 00:07:48.302 sys 0m0.029s 00:07:48.302 05:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.302 ************************************ 00:07:48.302 END TEST env_memory 00:07:48.302 ************************************ 00:07:48.302 05:26:52 -- common/autotest_common.sh@10 -- # set +x 00:07:48.302 05:26:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:48.302 05:26:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.302 05:26:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.302 05:26:52 -- common/autotest_common.sh@10 -- # set +x 00:07:48.302 ************************************ 00:07:48.302 START TEST env_vtophys 00:07:48.302 ************************************ 00:07:48.302 05:26:52 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:48.561 EAL: lib.eal log level changed from notice to debug 00:07:48.561 EAL: Detected lcore 0 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 1 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 2 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 3 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 4 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 5 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 6 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 7 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 8 as core 0 on socket 0 00:07:48.561 EAL: Detected lcore 9 as core 0 on socket 0 00:07:48.561 EAL: Maximum logical cores by configuration: 128 00:07:48.561 EAL: Detected CPU lcores: 10 00:07:48.561 EAL: Detected NUMA nodes: 1 00:07:48.561 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:48.561 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:48.561 EAL: Checking presence of .so 'librte_eal.so' 00:07:48.561 EAL: Detected static linkage of DPDK 00:07:48.561 EAL: No shared files mode enabled, IPC will be disabled 00:07:48.561 EAL: Selected IOVA mode 'PA' 00:07:48.561 EAL: Probing VFIO support... 00:07:48.561 EAL: IOMMU type 1 (Type 1) is supported 00:07:48.561 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:48.561 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:48.561 EAL: VFIO support initialized 00:07:48.561 EAL: Ask a virtual area of 0x2e000 bytes 00:07:48.561 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:48.561 EAL: Setting up physically contiguous memory... 00:07:48.561 EAL: Setting maximum number of open files to 1048576 00:07:48.561 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:48.561 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:48.561 EAL: Ask a virtual area of 0x61000 bytes 00:07:48.561 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:48.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:48.561 EAL: Ask a virtual area of 0x400000000 bytes 00:07:48.561 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:48.561 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:48.561 EAL: Ask a virtual area of 0x61000 bytes 00:07:48.561 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:48.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:48.561 EAL: Ask a virtual area of 0x400000000 bytes 00:07:48.561 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:48.561 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:48.561 EAL: Ask a virtual area of 0x61000 bytes 00:07:48.561 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:48.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:48.561 EAL: Ask a virtual area of 0x400000000 bytes 00:07:48.561 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:48.561 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:48.561 EAL: Ask a virtual area of 0x61000 bytes 00:07:48.561 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:48.561 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:48.561 EAL: Ask a virtual area of 0x400000000 bytes 00:07:48.561 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:48.561 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:48.561 EAL: Hugepages will be freed exactly as allocated. 
00:07:48.561 EAL: No shared files mode enabled, IPC is disabled 00:07:48.562 EAL: No shared files mode enabled, IPC is disabled 00:07:48.562 EAL: TSC frequency is ~2200000 KHz 00:07:48.562 EAL: Main lcore 0 is ready (tid=7f0af9ef0a80;cpuset=[0]) 00:07:48.562 EAL: Trying to obtain current memory policy. 00:07:48.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.562 EAL: Restoring previous memory policy: 0 00:07:48.562 EAL: request: mp_malloc_sync 00:07:48.562 EAL: No shared files mode enabled, IPC is disabled 00:07:48.562 EAL: Heap on socket 0 was expanded by 2MB 00:07:48.562 EAL: No shared files mode enabled, IPC is disabled 00:07:48.562 EAL: Mem event callback 'spdk:(nil)' registered 00:07:48.562 00:07:48.562 00:07:48.562 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.562 http://cunit.sourceforge.net/ 00:07:48.562 00:07:48.562 00:07:48.562 Suite: components_suite 00:07:49.130 Test: vtophys_malloc_test ...passed 00:07:49.130 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.130 EAL: Restoring previous memory policy: 0 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was expanded by 4MB 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was shrunk by 4MB 00:07:49.130 EAL: Trying to obtain current memory policy. 00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.130 EAL: Restoring previous memory policy: 0 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was expanded by 6MB 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was shrunk by 6MB 00:07:49.130 EAL: Trying to obtain current memory policy. 00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.130 EAL: Restoring previous memory policy: 0 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was expanded by 10MB 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was shrunk by 10MB 00:07:49.130 EAL: Trying to obtain current memory policy. 00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.130 EAL: Restoring previous memory policy: 0 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was expanded by 18MB 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was shrunk by 18MB 00:07:49.130 EAL: Trying to obtain current memory policy. 
00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.130 EAL: Restoring previous memory policy: 0 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was expanded by 34MB 00:07:49.130 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.130 EAL: request: mp_malloc_sync 00:07:49.130 EAL: No shared files mode enabled, IPC is disabled 00:07:49.130 EAL: Heap on socket 0 was shrunk by 34MB 00:07:49.130 EAL: Trying to obtain current memory policy. 00:07:49.130 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.389 EAL: Restoring previous memory policy: 0 00:07:49.389 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.389 EAL: request: mp_malloc_sync 00:07:49.389 EAL: No shared files mode enabled, IPC is disabled 00:07:49.389 EAL: Heap on socket 0 was expanded by 66MB 00:07:49.389 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.389 EAL: request: mp_malloc_sync 00:07:49.389 EAL: No shared files mode enabled, IPC is disabled 00:07:49.389 EAL: Heap on socket 0 was shrunk by 66MB 00:07:49.389 EAL: Trying to obtain current memory policy. 00:07:49.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.389 EAL: Restoring previous memory policy: 0 00:07:49.389 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.389 EAL: request: mp_malloc_sync 00:07:49.389 EAL: No shared files mode enabled, IPC is disabled 00:07:49.389 EAL: Heap on socket 0 was expanded by 130MB 00:07:49.648 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.648 EAL: request: mp_malloc_sync 00:07:49.648 EAL: No shared files mode enabled, IPC is disabled 00:07:49.648 EAL: Heap on socket 0 was shrunk by 130MB 00:07:49.910 EAL: Trying to obtain current memory policy. 00:07:49.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.910 EAL: Restoring previous memory policy: 0 00:07:49.910 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.910 EAL: request: mp_malloc_sync 00:07:49.910 EAL: No shared files mode enabled, IPC is disabled 00:07:49.910 EAL: Heap on socket 0 was expanded by 258MB 00:07:50.479 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.479 EAL: request: mp_malloc_sync 00:07:50.479 EAL: No shared files mode enabled, IPC is disabled 00:07:50.479 EAL: Heap on socket 0 was shrunk by 258MB 00:07:50.738 EAL: Trying to obtain current memory policy. 00:07:50.738 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:50.997 EAL: Restoring previous memory policy: 0 00:07:50.997 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.997 EAL: request: mp_malloc_sync 00:07:50.997 EAL: No shared files mode enabled, IPC is disabled 00:07:50.997 EAL: Heap on socket 0 was expanded by 514MB 00:07:51.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.824 EAL: request: mp_malloc_sync 00:07:51.824 EAL: No shared files mode enabled, IPC is disabled 00:07:51.824 EAL: Heap on socket 0 was shrunk by 514MB 00:07:52.392 EAL: Trying to obtain current memory policy. 
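Each "Heap on socket 0 was expanded by N MB" / "was shrunk by N MB" pair above is vtophys_malloc_test allocating and then freeing progressively larger buffers (a few MB up through 1026 MB); every grow or shrink of the EAL heap invokes the registered 'spdk:(nil)' mem event callback before being logged. A hedged sketch for watching the 2 MB hugepage pool from another shell while this test runs — generic Linux commands, not taken from this log:

    # Hugepage usage rises and falls as the EAL heap is expanded and shrunk.
    watch -n1 'grep -E "HugePages_(Total|Free|Rsvd)" /proc/meminfo'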
00:07:52.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:52.959 EAL: Restoring previous memory policy: 0 00:07:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:07:52.959 EAL: request: mp_malloc_sync 00:07:52.959 EAL: No shared files mode enabled, IPC is disabled 00:07:52.959 EAL: Heap on socket 0 was expanded by 1026MB 00:07:54.862 EAL: Calling mem event callback 'spdk:(nil)' 00:07:54.862 EAL: request: mp_malloc_sync 00:07:54.862 EAL: No shared files mode enabled, IPC is disabled 00:07:54.862 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:56.239 passed 00:07:56.239 00:07:56.239 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.239 suites 1 1 n/a 0 0 00:07:56.239 tests 2 2 2 0 0 00:07:56.239 asserts 6370 6370 6370 0 n/a 00:07:56.239 00:07:56.239 Elapsed time = 7.671 seconds 00:07:56.239 EAL: Calling mem event callback 'spdk:(nil)' 00:07:56.239 EAL: request: mp_malloc_sync 00:07:56.239 EAL: No shared files mode enabled, IPC is disabled 00:07:56.239 EAL: Heap on socket 0 was shrunk by 2MB 00:07:56.239 EAL: No shared files mode enabled, IPC is disabled 00:07:56.239 EAL: No shared files mode enabled, IPC is disabled 00:07:56.239 EAL: No shared files mode enabled, IPC is disabled 00:07:56.239 00:07:56.239 real 0m7.970s 00:07:56.239 user 0m6.626s 00:07:56.239 sys 0m1.208s 00:07:56.239 05:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.239 ************************************ 00:07:56.239 END TEST env_vtophys 00:07:56.239 ************************************ 00:07:56.239 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 05:27:00 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.498 05:27:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.498 05:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.498 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 ************************************ 00:07:56.498 START TEST env_pci 00:07:56.498 ************************************ 00:07:56.498 05:27:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:56.498 00:07:56.498 00:07:56.498 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.498 http://cunit.sourceforge.net/ 00:07:56.498 00:07:56.498 00:07:56.498 Suite: pci 00:07:56.498 Test: pci_hook ...[2024-10-07 05:27:00.328341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103477 has claimed it 00:07:56.498 passed 00:07:56.498 00:07:56.498 EAL: Cannot find device (10000:00:01.0) 00:07:56.498 EAL: Failed to attach device on primary process 00:07:56.498 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.498 suites 1 1 n/a 0 0 00:07:56.498 tests 1 1 1 0 0 00:07:56.498 asserts 25 25 25 0 n/a 00:07:56.498 00:07:56.498 Elapsed time = 0.006 seconds 00:07:56.498 00:07:56.498 real 0m0.090s 00:07:56.498 user 0m0.052s 00:07:56.498 sys 0m0.038s 00:07:56.498 05:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.498 ************************************ 00:07:56.498 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 END TEST env_pci 00:07:56.498 ************************************ 00:07:56.498 05:27:00 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:56.498 05:27:00 -- env/env.sh@15 -- # uname 00:07:56.498 05:27:00 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:56.498 05:27:00 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:56.498 05:27:00 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.498 05:27:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:56.498 05:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.498 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:56.498 ************************************ 00:07:56.498 START TEST env_dpdk_post_init 00:07:56.498 ************************************ 00:07:56.498 05:27:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:56.756 EAL: Detected CPU lcores: 10 00:07:56.756 EAL: Detected NUMA nodes: 1 00:07:56.756 EAL: Detected static linkage of DPDK 00:07:56.756 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:56.756 EAL: Selected IOVA mode 'PA' 00:07:56.756 EAL: VFIO support initialized 00:07:56.756 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:56.756 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:57.015 Starting DPDK initialization... 00:07:57.015 Starting SPDK post initialization... 00:07:57.015 SPDK NVMe probe 00:07:57.015 Attaching to 0000:00:06.0 00:07:57.015 Attached to 0000:00:06.0 00:07:57.015 Cleaning up... 00:07:57.015 00:07:57.015 real 0m0.283s 00:07:57.015 user 0m0.104s 00:07:57.015 sys 0m0.081s 00:07:57.015 05:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.015 ************************************ 00:07:57.015 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:57.015 END TEST env_dpdk_post_init 00:07:57.015 ************************************ 00:07:57.015 05:27:00 -- env/env.sh@26 -- # uname 00:07:57.015 05:27:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:57.015 05:27:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:57.015 05:27:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.015 05:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.015 05:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:57.015 ************************************ 00:07:57.015 START TEST env_mem_callbacks 00:07:57.015 ************************************ 00:07:57.015 05:27:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:57.015 EAL: Detected CPU lcores: 10 00:07:57.015 EAL: Detected NUMA nodes: 1 00:07:57.015 EAL: Detected static linkage of DPDK 00:07:57.015 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:57.015 EAL: Selected IOVA mode 'PA' 00:07:57.015 EAL: VFIO support initialized 00:07:57.275 00:07:57.275 00:07:57.275 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.275 http://cunit.sourceforge.net/ 00:07:57.275 00:07:57.275 00:07:57.275 Suite: memory 00:07:57.275 Test: test ... 
00:07:57.275 register 0x200000200000 2097152 00:07:57.275 malloc 3145728 00:07:57.275 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:57.275 register 0x200000400000 4194304 00:07:57.275 buf 0x2000004fffc0 len 3145728 PASSED 00:07:57.275 malloc 64 00:07:57.275 buf 0x2000004ffec0 len 64 PASSED 00:07:57.275 malloc 4194304 00:07:57.275 register 0x200000800000 6291456 00:07:57.275 buf 0x2000009fffc0 len 4194304 PASSED 00:07:57.275 free 0x2000004fffc0 3145728 00:07:57.275 free 0x2000004ffec0 64 00:07:57.275 unregister 0x200000400000 4194304 PASSED 00:07:57.275 free 0x2000009fffc0 4194304 00:07:57.275 unregister 0x200000800000 6291456 PASSED 00:07:57.275 malloc 8388608 00:07:57.275 register 0x200000400000 10485760 00:07:57.275 buf 0x2000005fffc0 len 8388608 PASSED 00:07:57.275 free 0x2000005fffc0 8388608 00:07:57.275 unregister 0x200000400000 10485760 PASSED 00:07:57.275 passed 00:07:57.275 00:07:57.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.275 suites 1 1 n/a 0 0 00:07:57.275 tests 1 1 1 0 0 00:07:57.275 asserts 15 15 15 0 n/a 00:07:57.275 00:07:57.275 Elapsed time = 0.075 seconds 00:07:57.275 ************************************ 00:07:57.275 END TEST env_mem_callbacks 00:07:57.275 ************************************ 00:07:57.275 00:07:57.275 real 0m0.321s 00:07:57.275 user 0m0.132s 00:07:57.275 sys 0m0.087s 00:07:57.275 05:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.275 05:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.275 ************************************ 00:07:57.275 END TEST env 00:07:57.275 ************************************ 00:07:57.275 00:07:57.275 real 0m9.331s 00:07:57.275 user 0m7.332s 00:07:57.275 sys 0m1.582s 00:07:57.275 05:27:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.275 05:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 05:27:01 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:57.534 05:27:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.534 05:27:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.534 05:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 ************************************ 00:07:57.534 START TEST rpc 00:07:57.534 ************************************ 00:07:57.534 05:27:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:57.534 * Looking for test storage... 00:07:57.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:57.534 05:27:01 -- rpc/rpc.sh@65 -- # spdk_pid=103607 00:07:57.534 05:27:01 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.534 05:27:01 -- rpc/rpc.sh@67 -- # waitforlisten 103607 00:07:57.534 05:27:01 -- common/autotest_common.sh@819 -- # '[' -z 103607 ']' 00:07:57.534 05:27:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.534 05:27:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.534 05:27:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
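Note: rpc.sh has just launched spdk_tgt with the bdev tracepoint group enabled and is waiting for its RPC socket. A hedged sketch of that start-and-wait pattern follows; the binary path and default socket are the ones shown in this log, while the polling loop is illustrative rather than the harness's waitforlisten helper:
# Hedged sketch of "start target, wait for /var/tmp/spdk.sock" -- not the actual waitforlisten helper.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # keep polling until the target answers on the default UNIX socket
done
echo "spdk_tgt (pid $spdk_pid) is up on /var/tmp/spdk.sock"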
00:07:57.534 05:27:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.534 05:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:57.534 05:27:01 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:57.534 [2024-10-07 05:27:01.449124] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:57.534 [2024-10-07 05:27:01.449640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103607 ] 00:07:57.792 [2024-10-07 05:27:01.620046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.051 [2024-10-07 05:27:01.894761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.051 [2024-10-07 05:27:01.895044] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:58.051 [2024-10-07 05:27:01.895101] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 103607' to capture a snapshot of events at runtime. 00:07:58.051 [2024-10-07 05:27:01.895127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid103607 for offline analysis/debug. 00:07:58.051 [2024-10-07 05:27:01.895223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.428 05:27:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.428 05:27:03 -- common/autotest_common.sh@852 -- # return 0 00:07:59.428 05:27:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:59.428 05:27:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:59.428 05:27:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:59.428 05:27:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:59.428 05:27:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.428 05:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 ************************************ 00:07:59.428 START TEST rpc_integrity 00:07:59.428 ************************************ 00:07:59.428 05:27:03 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:07:59.428 05:27:03 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:59.428 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.428 05:27:03 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:59.428 05:27:03 -- rpc/rpc.sh@13 -- # jq length 00:07:59.428 05:27:03 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:59.428 05:27:03 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:59.428 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.428 05:27:03 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:59.428 05:27:03 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:59.428 05:27:03 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.428 05:27:03 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:59.428 { 00:07:59.428 "name": "Malloc0", 00:07:59.428 "aliases": [ 00:07:59.428 "65d8458f-a320-48cc-b385-80e0763e42bc" 00:07:59.428 ], 00:07:59.428 "product_name": "Malloc disk", 00:07:59.428 "block_size": 512, 00:07:59.428 "num_blocks": 16384, 00:07:59.428 "uuid": "65d8458f-a320-48cc-b385-80e0763e42bc", 00:07:59.428 "assigned_rate_limits": { 00:07:59.428 "rw_ios_per_sec": 0, 00:07:59.428 "rw_mbytes_per_sec": 0, 00:07:59.428 "r_mbytes_per_sec": 0, 00:07:59.428 "w_mbytes_per_sec": 0 00:07:59.428 }, 00:07:59.428 "claimed": false, 00:07:59.428 "zoned": false, 00:07:59.428 "supported_io_types": { 00:07:59.428 "read": true, 00:07:59.428 "write": true, 00:07:59.428 "unmap": true, 00:07:59.428 "write_zeroes": true, 00:07:59.428 "flush": true, 00:07:59.428 "reset": true, 00:07:59.428 "compare": false, 00:07:59.428 "compare_and_write": false, 00:07:59.428 "abort": true, 00:07:59.428 "nvme_admin": false, 00:07:59.428 "nvme_io": false 00:07:59.428 }, 00:07:59.428 "memory_domains": [ 00:07:59.428 { 00:07:59.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.428 "dma_device_type": 2 00:07:59.428 } 00:07:59.428 ], 00:07:59.428 "driver_specific": {} 00:07:59.428 } 00:07:59.428 ]' 00:07:59.428 05:27:03 -- rpc/rpc.sh@17 -- # jq length 00:07:59.428 05:27:03 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:59.428 05:27:03 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:59.428 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 [2024-10-07 05:27:03.364426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:59.428 [2024-10-07 05:27:03.364543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.428 [2024-10-07 05:27:03.364609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:07:59.428 [2024-10-07 05:27:03.364647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.428 [2024-10-07 05:27:03.367380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.428 [2024-10-07 05:27:03.367467] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:59.428 Passthru0 00:07:59.428 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.428 05:27:03 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:59.428 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.428 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.428 05:27:03 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:59.428 { 00:07:59.428 "name": "Malloc0", 00:07:59.428 "aliases": [ 00:07:59.428 "65d8458f-a320-48cc-b385-80e0763e42bc" 00:07:59.428 ], 00:07:59.428 "product_name": "Malloc disk", 00:07:59.428 "block_size": 512, 00:07:59.428 "num_blocks": 16384, 00:07:59.428 "uuid": "65d8458f-a320-48cc-b385-80e0763e42bc", 00:07:59.428 "assigned_rate_limits": { 00:07:59.428 "rw_ios_per_sec": 0, 00:07:59.428 "rw_mbytes_per_sec": 0, 00:07:59.428 "r_mbytes_per_sec": 0, 00:07:59.428 "w_mbytes_per_sec": 0 00:07:59.428 }, 00:07:59.428 "claimed": true, 00:07:59.428 "claim_type": "exclusive_write", 00:07:59.428 
"zoned": false, 00:07:59.428 "supported_io_types": { 00:07:59.428 "read": true, 00:07:59.428 "write": true, 00:07:59.428 "unmap": true, 00:07:59.428 "write_zeroes": true, 00:07:59.428 "flush": true, 00:07:59.428 "reset": true, 00:07:59.428 "compare": false, 00:07:59.428 "compare_and_write": false, 00:07:59.428 "abort": true, 00:07:59.428 "nvme_admin": false, 00:07:59.428 "nvme_io": false 00:07:59.428 }, 00:07:59.428 "memory_domains": [ 00:07:59.428 { 00:07:59.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.428 "dma_device_type": 2 00:07:59.428 } 00:07:59.428 ], 00:07:59.428 "driver_specific": {} 00:07:59.428 }, 00:07:59.428 { 00:07:59.428 "name": "Passthru0", 00:07:59.428 "aliases": [ 00:07:59.428 "4a4df5ed-2f7e-5bde-affb-44c5cb6760e4" 00:07:59.428 ], 00:07:59.428 "product_name": "passthru", 00:07:59.428 "block_size": 512, 00:07:59.428 "num_blocks": 16384, 00:07:59.428 "uuid": "4a4df5ed-2f7e-5bde-affb-44c5cb6760e4", 00:07:59.428 "assigned_rate_limits": { 00:07:59.428 "rw_ios_per_sec": 0, 00:07:59.428 "rw_mbytes_per_sec": 0, 00:07:59.428 "r_mbytes_per_sec": 0, 00:07:59.428 "w_mbytes_per_sec": 0 00:07:59.428 }, 00:07:59.428 "claimed": false, 00:07:59.428 "zoned": false, 00:07:59.428 "supported_io_types": { 00:07:59.429 "read": true, 00:07:59.429 "write": true, 00:07:59.429 "unmap": true, 00:07:59.429 "write_zeroes": true, 00:07:59.429 "flush": true, 00:07:59.429 "reset": true, 00:07:59.429 "compare": false, 00:07:59.429 "compare_and_write": false, 00:07:59.429 "abort": true, 00:07:59.429 "nvme_admin": false, 00:07:59.429 "nvme_io": false 00:07:59.429 }, 00:07:59.429 "memory_domains": [ 00:07:59.429 { 00:07:59.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.429 "dma_device_type": 2 00:07:59.429 } 00:07:59.429 ], 00:07:59.429 "driver_specific": { 00:07:59.429 "passthru": { 00:07:59.429 "name": "Passthru0", 00:07:59.429 "base_bdev_name": "Malloc0" 00:07:59.429 } 00:07:59.429 } 00:07:59.429 } 00:07:59.429 ]' 00:07:59.429 05:27:03 -- rpc/rpc.sh@21 -- # jq length 00:07:59.687 05:27:03 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:59.687 05:27:03 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:59.687 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.687 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.687 05:27:03 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:59.687 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.687 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.687 05:27:03 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:59.687 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.687 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.687 05:27:03 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:59.687 05:27:03 -- rpc/rpc.sh@26 -- # jq length 00:07:59.687 05:27:03 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:59.688 00:07:59.688 real 0m0.333s 00:07:59.688 user 0m0.222s 00:07:59.688 sys 0m0.017s 00:07:59.688 05:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.688 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.688 ************************************ 00:07:59.688 END TEST rpc_integrity 00:07:59.688 ************************************ 00:07:59.688 05:27:03 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 
00:07:59.688 05:27:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.688 05:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.688 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.688 ************************************ 00:07:59.688 START TEST rpc_plugins 00:07:59.688 ************************************ 00:07:59.688 05:27:03 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:07:59.688 05:27:03 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:59.688 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.688 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.688 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.688 05:27:03 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:59.688 05:27:03 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:59.688 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.688 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.688 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.688 05:27:03 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:59.688 { 00:07:59.688 "name": "Malloc1", 00:07:59.688 "aliases": [ 00:07:59.688 "be99e797-ad57-46f5-bdbd-7fcda186b34f" 00:07:59.688 ], 00:07:59.688 "product_name": "Malloc disk", 00:07:59.688 "block_size": 4096, 00:07:59.688 "num_blocks": 256, 00:07:59.688 "uuid": "be99e797-ad57-46f5-bdbd-7fcda186b34f", 00:07:59.688 "assigned_rate_limits": { 00:07:59.688 "rw_ios_per_sec": 0, 00:07:59.688 "rw_mbytes_per_sec": 0, 00:07:59.688 "r_mbytes_per_sec": 0, 00:07:59.688 "w_mbytes_per_sec": 0 00:07:59.688 }, 00:07:59.688 "claimed": false, 00:07:59.688 "zoned": false, 00:07:59.688 "supported_io_types": { 00:07:59.688 "read": true, 00:07:59.688 "write": true, 00:07:59.688 "unmap": true, 00:07:59.688 "write_zeroes": true, 00:07:59.688 "flush": true, 00:07:59.688 "reset": true, 00:07:59.688 "compare": false, 00:07:59.688 "compare_and_write": false, 00:07:59.688 "abort": true, 00:07:59.688 "nvme_admin": false, 00:07:59.688 "nvme_io": false 00:07:59.688 }, 00:07:59.688 "memory_domains": [ 00:07:59.688 { 00:07:59.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.688 "dma_device_type": 2 00:07:59.688 } 00:07:59.688 ], 00:07:59.688 "driver_specific": {} 00:07:59.688 } 00:07:59.688 ]' 00:07:59.688 05:27:03 -- rpc/rpc.sh@32 -- # jq length 00:07:59.946 05:27:03 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:59.946 05:27:03 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:59.946 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.946 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.946 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.946 05:27:03 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:59.946 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.946 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.946 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.946 05:27:03 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:59.946 05:27:03 -- rpc/rpc.sh@36 -- # jq length 00:07:59.946 05:27:03 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:59.946 00:07:59.946 real 0m0.157s 00:07:59.946 user 0m0.108s 00:07:59.946 sys 0m0.011s 00:07:59.946 05:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.946 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.946 ************************************ 00:07:59.946 END TEST rpc_plugins 00:07:59.946 ************************************ 
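Note: the rpc_integrity and rpc_plugins runs above exercise the same target; the RPC chain rpc_integrity drives can be reproduced by hand with scripts/rpc.py against the same socket. A hedged sketch, using the commands and sizes visible in the log ($RPC is only shorthand here):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_malloc_create 8 512                       # 8 MB malloc bdev with 512-byte blocks -> Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0   # layer a passthru vbdev on top of it
$RPC bdev_get_bdevs | jq length                     # both bdevs should now be reported
$RPC bdev_passthru_delete Passthru0                 # tear down in reverse order
$RPC bdev_malloc_delete Malloc0
rpc_plugins goes through the same socket but first extends PYTHONPATH with test/rpc_plugins (as exported earlier in this log) so that rpc.py --plugin rpc_plugin create_malloc / delete_malloc Malloc1 can resolve the plugin module.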
00:07:59.946 05:27:03 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:59.946 05:27:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.946 05:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.946 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.946 ************************************ 00:07:59.946 START TEST rpc_trace_cmd_test 00:07:59.946 ************************************ 00:07:59.946 05:27:03 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:07:59.946 05:27:03 -- rpc/rpc.sh@40 -- # local info 00:07:59.947 05:27:03 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:59.947 05:27:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.947 05:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:59.947 05:27:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.947 05:27:03 -- rpc/rpc.sh@42 -- # info='{ 00:07:59.947 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid103607", 00:07:59.947 "tpoint_group_mask": "0x8", 00:07:59.947 "iscsi_conn": { 00:07:59.947 "mask": "0x2", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "scsi": { 00:07:59.947 "mask": "0x4", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "bdev": { 00:07:59.947 "mask": "0x8", 00:07:59.947 "tpoint_mask": "0xffffffffffffffff" 00:07:59.947 }, 00:07:59.947 "nvmf_rdma": { 00:07:59.947 "mask": "0x10", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "nvmf_tcp": { 00:07:59.947 "mask": "0x20", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "ftl": { 00:07:59.947 "mask": "0x40", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "blobfs": { 00:07:59.947 "mask": "0x80", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "dsa": { 00:07:59.947 "mask": "0x200", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "thread": { 00:07:59.947 "mask": "0x400", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "nvme_pcie": { 00:07:59.947 "mask": "0x800", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "iaa": { 00:07:59.947 "mask": "0x1000", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "nvme_tcp": { 00:07:59.947 "mask": "0x2000", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 }, 00:07:59.947 "bdev_nvme": { 00:07:59.947 "mask": "0x4000", 00:07:59.947 "tpoint_mask": "0x0" 00:07:59.947 } 00:07:59.947 }' 00:07:59.947 05:27:03 -- rpc/rpc.sh@43 -- # jq length 00:07:59.947 05:27:03 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:59.947 05:27:03 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:00.205 05:27:03 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:00.205 05:27:03 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:00.205 05:27:04 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:00.205 05:27:04 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:00.205 05:27:04 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:00.205 05:27:04 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:00.205 05:27:04 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:00.205 00:08:00.205 real 0m0.261s 00:08:00.205 user 0m0.228s 00:08:00.205 sys 0m0.030s 00:08:00.205 05:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.205 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.205 ************************************ 00:08:00.205 END TEST rpc_trace_cmd_test 00:08:00.205 ************************************ 00:08:00.205 05:27:04 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:00.205 05:27:04 -- 
rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:00.205 05:27:04 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:00.205 05:27:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:00.205 05:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.205 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 ************************************ 00:08:00.464 START TEST rpc_daemon_integrity 00:08:00.464 ************************************ 00:08:00.464 05:27:04 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:00.464 05:27:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:00.464 05:27:04 -- rpc/rpc.sh@13 -- # jq length 00:08:00.464 05:27:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:00.464 05:27:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:00.464 05:27:04 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:00.464 { 00:08:00.464 "name": "Malloc2", 00:08:00.464 "aliases": [ 00:08:00.464 "6fbdd82e-b672-4c13-8bc9-fb439495737e" 00:08:00.464 ], 00:08:00.464 "product_name": "Malloc disk", 00:08:00.464 "block_size": 512, 00:08:00.464 "num_blocks": 16384, 00:08:00.464 "uuid": "6fbdd82e-b672-4c13-8bc9-fb439495737e", 00:08:00.464 "assigned_rate_limits": { 00:08:00.464 "rw_ios_per_sec": 0, 00:08:00.464 "rw_mbytes_per_sec": 0, 00:08:00.464 "r_mbytes_per_sec": 0, 00:08:00.464 "w_mbytes_per_sec": 0 00:08:00.464 }, 00:08:00.464 "claimed": false, 00:08:00.464 "zoned": false, 00:08:00.464 "supported_io_types": { 00:08:00.464 "read": true, 00:08:00.464 "write": true, 00:08:00.464 "unmap": true, 00:08:00.464 "write_zeroes": true, 00:08:00.464 "flush": true, 00:08:00.464 "reset": true, 00:08:00.464 "compare": false, 00:08:00.464 "compare_and_write": false, 00:08:00.464 "abort": true, 00:08:00.464 "nvme_admin": false, 00:08:00.464 "nvme_io": false 00:08:00.464 }, 00:08:00.464 "memory_domains": [ 00:08:00.464 { 00:08:00.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.464 "dma_device_type": 2 00:08:00.464 } 00:08:00.464 ], 00:08:00.464 "driver_specific": {} 00:08:00.464 } 00:08:00.464 ]' 00:08:00.464 05:27:04 -- rpc/rpc.sh@17 -- # jq length 00:08:00.464 05:27:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:00.464 05:27:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 [2024-10-07 05:27:04.346373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:00.464 [2024-10-07 05:27:04.346473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.464 [2024-10-07 05:27:04.346531] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:00.464 [2024-10-07 05:27:04.346559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.464 [2024-10-07 05:27:04.349220] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.464 [2024-10-07 05:27:04.349302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:00.464 Passthru0 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:00.464 { 00:08:00.464 "name": "Malloc2", 00:08:00.464 "aliases": [ 00:08:00.464 "6fbdd82e-b672-4c13-8bc9-fb439495737e" 00:08:00.464 ], 00:08:00.464 "product_name": "Malloc disk", 00:08:00.464 "block_size": 512, 00:08:00.464 "num_blocks": 16384, 00:08:00.464 "uuid": "6fbdd82e-b672-4c13-8bc9-fb439495737e", 00:08:00.464 "assigned_rate_limits": { 00:08:00.464 "rw_ios_per_sec": 0, 00:08:00.464 "rw_mbytes_per_sec": 0, 00:08:00.464 "r_mbytes_per_sec": 0, 00:08:00.464 "w_mbytes_per_sec": 0 00:08:00.464 }, 00:08:00.464 "claimed": true, 00:08:00.464 "claim_type": "exclusive_write", 00:08:00.464 "zoned": false, 00:08:00.464 "supported_io_types": { 00:08:00.464 "read": true, 00:08:00.464 "write": true, 00:08:00.464 "unmap": true, 00:08:00.464 "write_zeroes": true, 00:08:00.464 "flush": true, 00:08:00.464 "reset": true, 00:08:00.464 "compare": false, 00:08:00.464 "compare_and_write": false, 00:08:00.464 "abort": true, 00:08:00.464 "nvme_admin": false, 00:08:00.464 "nvme_io": false 00:08:00.464 }, 00:08:00.464 "memory_domains": [ 00:08:00.464 { 00:08:00.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.464 "dma_device_type": 2 00:08:00.464 } 00:08:00.464 ], 00:08:00.464 "driver_specific": {} 00:08:00.464 }, 00:08:00.464 { 00:08:00.464 "name": "Passthru0", 00:08:00.464 "aliases": [ 00:08:00.464 "a83adb26-af46-50bb-bc7f-1fbc3cde5686" 00:08:00.464 ], 00:08:00.464 "product_name": "passthru", 00:08:00.464 "block_size": 512, 00:08:00.464 "num_blocks": 16384, 00:08:00.464 "uuid": "a83adb26-af46-50bb-bc7f-1fbc3cde5686", 00:08:00.464 "assigned_rate_limits": { 00:08:00.464 "rw_ios_per_sec": 0, 00:08:00.464 "rw_mbytes_per_sec": 0, 00:08:00.464 "r_mbytes_per_sec": 0, 00:08:00.464 "w_mbytes_per_sec": 0 00:08:00.464 }, 00:08:00.464 "claimed": false, 00:08:00.464 "zoned": false, 00:08:00.464 "supported_io_types": { 00:08:00.464 "read": true, 00:08:00.464 "write": true, 00:08:00.464 "unmap": true, 00:08:00.464 "write_zeroes": true, 00:08:00.464 "flush": true, 00:08:00.464 "reset": true, 00:08:00.464 "compare": false, 00:08:00.464 "compare_and_write": false, 00:08:00.464 "abort": true, 00:08:00.464 "nvme_admin": false, 00:08:00.464 "nvme_io": false 00:08:00.464 }, 00:08:00.464 "memory_domains": [ 00:08:00.464 { 00:08:00.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.464 "dma_device_type": 2 00:08:00.464 } 00:08:00.464 ], 00:08:00.464 "driver_specific": { 00:08:00.464 "passthru": { 00:08:00.464 "name": "Passthru0", 00:08:00.464 "base_bdev_name": "Malloc2" 00:08:00.464 } 00:08:00.464 } 00:08:00.464 } 00:08:00.464 ]' 00:08:00.464 05:27:04 -- rpc/rpc.sh@21 -- # jq length 00:08:00.464 05:27:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:00.464 05:27:04 -- 
rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:00.464 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.464 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.464 05:27:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:00.465 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.465 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.723 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.723 05:27:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:00.723 05:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.723 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.723 05:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.723 05:27:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:00.723 05:27:04 -- rpc/rpc.sh@26 -- # jq length 00:08:00.723 05:27:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:00.723 00:08:00.723 real 0m0.334s 00:08:00.723 user 0m0.210s 00:08:00.723 sys 0m0.032s 00:08:00.723 05:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.723 05:27:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.723 ************************************ 00:08:00.723 END TEST rpc_daemon_integrity 00:08:00.723 ************************************ 00:08:00.723 05:27:04 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:00.723 05:27:04 -- rpc/rpc.sh@84 -- # killprocess 103607 00:08:00.723 05:27:04 -- common/autotest_common.sh@926 -- # '[' -z 103607 ']' 00:08:00.723 05:27:04 -- common/autotest_common.sh@930 -- # kill -0 103607 00:08:00.723 05:27:04 -- common/autotest_common.sh@931 -- # uname 00:08:00.723 05:27:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.723 05:27:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103607 00:08:00.723 05:27:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:00.723 05:27:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:00.723 killing process with pid 103607 00:08:00.723 05:27:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103607' 00:08:00.723 05:27:04 -- common/autotest_common.sh@945 -- # kill 103607 00:08:00.723 05:27:04 -- common/autotest_common.sh@950 -- # wait 103607 00:08:03.252 ************************************ 00:08:03.252 END TEST rpc 00:08:03.252 ************************************ 00:08:03.252 00:08:03.252 real 0m5.533s 00:08:03.252 user 0m6.513s 00:08:03.252 sys 0m0.832s 00:08:03.252 05:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.252 05:27:06 -- common/autotest_common.sh@10 -- # set +x 00:08:03.252 05:27:06 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:03.252 05:27:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.252 05:27:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.252 05:27:06 -- common/autotest_common.sh@10 -- # set +x 00:08:03.252 ************************************ 00:08:03.252 START TEST rpc_client 00:08:03.252 ************************************ 00:08:03.252 05:27:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:03.252 * Looking for test storage... 
00:08:03.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:03.252 05:27:06 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:03.252 OK 00:08:03.252 05:27:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:03.252 ************************************ 00:08:03.252 END TEST rpc_client 00:08:03.252 ************************************ 00:08:03.252 00:08:03.252 real 0m0.140s 00:08:03.252 user 0m0.081s 00:08:03.252 sys 0m0.066s 00:08:03.252 05:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.252 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.252 05:27:07 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:03.252 05:27:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:03.252 05:27:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.252 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.252 ************************************ 00:08:03.252 START TEST json_config 00:08:03.252 ************************************ 00:08:03.252 05:27:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:03.252 05:27:07 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.252 05:27:07 -- nvmf/common.sh@7 -- # uname -s 00:08:03.252 05:27:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.252 05:27:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.252 05:27:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.252 05:27:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.252 05:27:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.252 05:27:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.252 05:27:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.252 05:27:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.252 05:27:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.252 05:27:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.252 05:27:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fd03311d-a308-48ab-a33d-d4e36e06c3c4 00:08:03.252 05:27:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=fd03311d-a308-48ab-a33d-d4e36e06c3c4 00:08:03.252 05:27:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.252 05:27:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.252 05:27:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:03.252 05:27:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.252 05:27:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.252 05:27:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.252 05:27:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.252 05:27:07 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:03.252 05:27:07 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:03.252 05:27:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:03.252 05:27:07 -- paths/export.sh@5 -- # export PATH 00:08:03.252 05:27:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:03.252 05:27:07 -- nvmf/common.sh@46 -- # : 0 00:08:03.252 05:27:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:03.252 05:27:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:03.252 05:27:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:03.253 05:27:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.253 05:27:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.253 05:27:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:03.253 05:27:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:03.253 05:27:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:03.253 05:27:07 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:03.253 05:27:07 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:03.253 05:27:07 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:03.253 05:27:07 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:03.253 05:27:07 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:03.253 05:27:07 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:03.253 05:27:07 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:03.253 05:27:07 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:03.253 05:27:07 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:03.253 05:27:07 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:03.253 05:27:07 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:03.253 INFO: JSON configuration test init 00:08:03.253 05:27:07 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:03.253 05:27:07 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:03.253 05:27:07 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:03.253 05:27:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:03.253 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.253 05:27:07 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:03.253 05:27:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:03.253 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.253 05:27:07 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:03.253 05:27:07 -- json_config/json_config.sh@98 -- # local app=target 00:08:03.253 05:27:07 -- json_config/json_config.sh@99 -- # shift 00:08:03.253 05:27:07 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:03.253 05:27:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:03.253 05:27:07 -- json_config/json_config.sh@111 -- # app_pid[$app]=103910 00:08:03.253 Waiting for target to run... 00:08:03.253 05:27:07 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:03.253 05:27:07 -- json_config/json_config.sh@114 -- # waitforlisten 103910 /var/tmp/spdk_tgt.sock 00:08:03.253 05:27:07 -- common/autotest_common.sh@819 -- # '[' -z 103910 ']' 00:08:03.253 05:27:07 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:03.253 05:27:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:03.253 05:27:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:03.253 05:27:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:03.253 05:27:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.253 05:27:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.511 [2024-10-07 05:27:07.234475] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
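Note: for the JSON configuration test the target is started with --wait-for-rpc on its own socket (the full command line is visible just above), so nothing initializes until configuration is pushed in over RPC. A minimal sketch of that hand-off, where spdk_config.json is only a placeholder file name:
# Hedged sketch; flags and socket are taken from the log, spdk_config.json is a hypothetical input file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.5; done     # wait for the RPC socket to appear
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_config.json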
00:08:03.511 [2024-10-07 05:27:07.234681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103910 ] 00:08:03.769 [2024-10-07 05:27:07.668963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.028 [2024-10-07 05:27:07.867181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.028 [2024-10-07 05:27:07.867479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.286 05:27:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.286 05:27:08 -- common/autotest_common.sh@852 -- # return 0 00:08:04.286 00:08:04.286 05:27:08 -- json_config/json_config.sh@115 -- # echo '' 00:08:04.286 05:27:08 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:04.286 05:27:08 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:04.286 05:27:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.286 05:27:08 -- common/autotest_common.sh@10 -- # set +x 00:08:04.286 05:27:08 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:04.286 05:27:08 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:04.286 05:27:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:04.286 05:27:08 -- common/autotest_common.sh@10 -- # set +x 00:08:04.286 05:27:08 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:04.286 05:27:08 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:04.286 05:27:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:05.224 05:27:09 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:05.224 05:27:09 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:05.224 05:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:05.224 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.224 05:27:09 -- json_config/json_config.sh@48 -- # local ret=0 00:08:05.224 05:27:09 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:05.224 05:27:09 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:05.224 05:27:09 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:05.224 05:27:09 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:05.224 05:27:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:05.524 05:27:09 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:05.524 05:27:09 -- json_config/json_config.sh@51 -- # local get_types 00:08:05.524 05:27:09 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:05.524 05:27:09 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:05.524 05:27:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:05.524 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.524 05:27:09 -- json_config/json_config.sh@58 -- # return 0 00:08:05.524 05:27:09 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:05.524 05:27:09 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:05.524 05:27:09 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:05.524 05:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:05.524 05:27:09 -- common/autotest_common.sh@10 -- # set +x 00:08:05.524 05:27:09 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:05.524 05:27:09 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:05.524 05:27:09 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:05.524 05:27:09 -- json_config/json_config.sh@164 -- # get_notifications 00:08:05.524 05:27:09 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:05.524 05:27:09 -- json_config/json_config.sh@64 -- # IFS=: 00:08:05.524 05:27:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:05.524 05:27:09 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:05.524 05:27:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:05.524 05:27:09 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:05.782 05:27:09 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:05.782 05:27:09 -- json_config/json_config.sh@64 -- # IFS=: 00:08:05.782 05:27:09 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:05.782 05:27:09 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:05.783 05:27:09 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:05.783 05:27:09 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:05.783 05:27:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:06.041 Nvme0n1p0 Nvme0n1p1 00:08:06.041 05:27:09 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:06.041 05:27:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:06.299 [2024-10-07 05:27:10.121279] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:06.299 [2024-10-07 05:27:10.121394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:06.299 00:08:06.299 05:27:10 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:06.299 05:27:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:06.559 Malloc3 00:08:06.559 05:27:10 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:06.559 05:27:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:06.559 [2024-10-07 05:27:10.504249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:06.559 [2024-10-07 05:27:10.504370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.559 [2024-10-07 05:27:10.504405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:06.559 [2024-10-07 05:27:10.504435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:06.559 [2024-10-07 05:27:10.506624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.559 [2024-10-07 05:27:10.506703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:06.559 PTBdevFromMalloc3 00:08:06.559 05:27:10 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:06.559 05:27:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:06.818 Null0 00:08:06.818 05:27:10 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:06.818 05:27:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:07.077 Malloc0 00:08:07.077 05:27:10 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:07.077 05:27:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:07.335 Malloc1 00:08:07.335 05:27:11 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:07.335 05:27:11 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:07.594 102400+0 records in 00:08:07.594 102400+0 records out 00:08:07.594 104857600 bytes (105 MB, 100 MiB) copied, 0.263802 s, 397 MB/s 00:08:07.594 05:27:11 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:07.594 05:27:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:07.852 aio_disk 00:08:07.852 05:27:11 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:07.852 05:27:11 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:07.852 05:27:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:08.110 4747dbd9-b780-4699-92e0-2547d6e310b8 00:08:08.110 05:27:11 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:08.110 05:27:11 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:08.111 05:27:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:08.368 05:27:12 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:08.368 05:27:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:08.368 05:27:12 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:08.368 05:27:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:08.625 05:27:12 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:08.625 05:27:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:08.883 05:27:12 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:08.883 05:27:12 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:08.883 05:27:12 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 00:08:08.883 05:27:12 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:08.883 05:27:12 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:08.883 05:27:12 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:08.883 05:27:12 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 00:08:08.883 05:27:12 -- json_config/json_config.sh@74 -- # sort 00:08:08.883 05:27:12 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:08.883 05:27:12 -- json_config/json_config.sh@75 -- # get_notifications 00:08:08.883 05:27:12 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:08.883 05:27:12 -- json_config/json_config.sh@75 -- # sort 00:08:08.883 05:27:12 -- json_config/json_config.sh@64 -- # IFS=: 00:08:08.883 05:27:12 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:08.883 05:27:12 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:08.883 05:27:12 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:08.883 05:27:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:09.141 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.141 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.141 05:27:13 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.141 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.141 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@65 -- # echo bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:09.142 05:27:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:09.142 05:27:13 -- json_config/json_config.sh@77 
-- # [[ bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\3\f\7\3\3\7\1\-\5\b\d\d\-\4\4\1\1\-\8\e\d\5\-\f\b\d\a\0\b\c\4\1\2\f\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\3\9\6\b\d\1\d\-\3\1\c\c\-\4\1\1\e\-\8\6\8\5\-\4\d\0\1\8\7\a\b\6\8\7\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\5\a\1\5\d\f\6\-\c\f\6\5\-\4\e\8\1\-\b\a\a\f\-\d\6\3\1\d\7\8\0\3\c\1\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\0\7\8\5\0\7\4\-\1\6\8\4\-\4\9\b\9\-\a\6\0\8\-\d\b\2\e\e\c\3\c\8\3\8\6 ]] 00:08:09.142 05:27:13 -- json_config/json_config.sh@89 -- # cat 00:08:09.142 05:27:13 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 00:08:09.142 Expected events matched: 00:08:09.142 bdev_register:03f73371-5bdd-4411-8ed5-fbda0bc412f1 00:08:09.142 bdev_register:1396bd1d-31cc-411e-8685-4d0187ab6876 00:08:09.142 bdev_register:Malloc0 00:08:09.142 bdev_register:Malloc0p0 00:08:09.142 bdev_register:Malloc0p1 00:08:09.142 bdev_register:Malloc0p2 00:08:09.142 bdev_register:Malloc1 00:08:09.142 bdev_register:Malloc3 00:08:09.142 bdev_register:Null0 00:08:09.142 bdev_register:Nvme0n1 00:08:09.142 bdev_register:Nvme0n1p0 00:08:09.142 bdev_register:Nvme0n1p1 00:08:09.142 bdev_register:PTBdevFromMalloc3 00:08:09.142 bdev_register:aio_disk 00:08:09.142 bdev_register:d5a15df6-cf65-4e81-baaf-d631d7803c1f 00:08:09.142 bdev_register:f0785074-1684-49b9-a608-db2eec3c8386 00:08:09.142 05:27:13 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:09.142 05:27:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.142 05:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.142 05:27:13 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:09.142 05:27:13 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:09.142 05:27:13 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:09.142 05:27:13 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:09.142 05:27:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.142 05:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.400 
05:27:13 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:09.400 05:27:13 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:09.400 05:27:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:09.659 MallocBdevForConfigChangeCheck 00:08:09.659 05:27:13 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:09.659 05:27:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.659 05:27:13 -- common/autotest_common.sh@10 -- # set +x 00:08:09.659 05:27:13 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:09.659 05:27:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:09.917 INFO: shutting down applications... 00:08:09.917 05:27:13 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:09.917 05:27:13 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:09.917 05:27:13 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:09.917 05:27:13 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:09.917 05:27:13 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:10.175 [2024-10-07 05:27:14.037362] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:10.435 Calling clear_vhost_scsi_subsystem 00:08:10.435 Calling clear_iscsi_subsystem 00:08:10.435 Calling clear_vhost_blk_subsystem 00:08:10.435 Calling clear_nbd_subsystem 00:08:10.435 Calling clear_nvmf_subsystem 00:08:10.435 Calling clear_bdev_subsystem 00:08:10.435 Calling clear_accel_subsystem 00:08:10.435 Calling clear_iobuf_subsystem 00:08:10.435 Calling clear_sock_subsystem 00:08:10.435 Calling clear_vmd_subsystem 00:08:10.435 Calling clear_scheduler_subsystem 00:08:10.435 05:27:14 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:10.435 05:27:14 -- json_config/json_config.sh@396 -- # count=100 00:08:10.435 05:27:14 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:10.435 05:27:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:10.435 05:27:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:10.435 05:27:14 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:10.694 05:27:14 -- json_config/json_config.sh@398 -- # break 00:08:10.694 05:27:14 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:10.694 05:27:14 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:10.694 05:27:14 -- json_config/json_config.sh@120 -- # local app=target 00:08:10.694 05:27:14 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:10.694 05:27:14 -- json_config/json_config.sh@124 -- # [[ -n 103910 ]] 00:08:10.694 05:27:14 -- json_config/json_config.sh@127 -- # kill -SIGINT 103910 00:08:10.694 05:27:14 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:10.694 05:27:14 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:10.694 05:27:14 -- 
json_config/json_config.sh@130 -- # kill -0 103910 00:08:10.694 05:27:14 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:11.262 05:27:15 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:11.262 05:27:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:11.262 05:27:15 -- json_config/json_config.sh@130 -- # kill -0 103910 00:08:11.262 05:27:15 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:11.829 05:27:15 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:11.829 05:27:15 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:11.829 05:27:15 -- json_config/json_config.sh@130 -- # kill -0 103910 00:08:11.829 05:27:15 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:11.829 05:27:15 -- json_config/json_config.sh@132 -- # break 00:08:11.829 05:27:15 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:11.829 SPDK target shutdown done 00:08:11.829 05:27:15 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:11.829 INFO: relaunching applications... 00:08:11.829 05:27:15 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:11.829 05:27:15 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:11.829 05:27:15 -- json_config/json_config.sh@98 -- # local app=target 00:08:11.829 05:27:15 -- json_config/json_config.sh@99 -- # shift 00:08:11.829 05:27:15 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:11.829 05:27:15 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:11.829 05:27:15 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:11.829 05:27:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:11.829 05:27:15 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:11.829 05:27:15 -- json_config/json_config.sh@111 -- # app_pid[$app]=104169 00:08:11.829 Waiting for target to run... 00:08:11.829 05:27:15 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:11.829 05:27:15 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:11.829 05:27:15 -- json_config/json_config.sh@114 -- # waitforlisten 104169 /var/tmp/spdk_tgt.sock 00:08:11.829 05:27:15 -- common/autotest_common.sh@819 -- # '[' -z 104169 ']' 00:08:11.829 05:27:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:11.829 05:27:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:11.829 05:27:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:11.829 05:27:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.829 05:27:15 -- common/autotest_common.sh@10 -- # set +x 00:08:11.829 [2024-10-07 05:27:15.686368] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
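[editor's note] The relaunch step above restarts the target from the configuration it saved earlier in the run. A minimal sketch of that pattern, assuming the repo's rpc.py and spdk_tgt binary at the paths shown in this log ($tgt_pid is a placeholder for the old target's pid, not a value taken from the log):

    # Dump the running target's configuration, stop it, and start a fresh instance from the dump.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "$tgt_pid"              # placeholder pid of the old target
    wait "$tgt_pid" 2>/dev/null || true
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
    tgt_pid=$!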
00:08:11.829 [2024-10-07 05:27:15.686619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104169 ] 00:08:12.397 [2024-10-07 05:27:16.137378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.397 [2024-10-07 05:27:16.288556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.397 [2024-10-07 05:27:16.288825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.964 [2024-10-07 05:27:16.885076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:12.964 [2024-10-07 05:27:16.885224] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:12.964 [2024-10-07 05:27:16.893046] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:12.964 [2024-10-07 05:27:16.893136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:12.964 [2024-10-07 05:27:16.901066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:12.964 [2024-10-07 05:27:16.901188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:12.964 [2024-10-07 05:27:16.901227] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:13.223 [2024-10-07 05:27:16.998968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:13.223 [2024-10-07 05:27:16.999098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.223 [2024-10-07 05:27:16.999156] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:13.223 [2024-10-07 05:27:16.999196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.223 [2024-10-07 05:27:16.999875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.223 [2024-10-07 05:27:16.999975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:13.482 05:27:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:13.482 05:27:17 -- common/autotest_common.sh@852 -- # return 0 00:08:13.482 00:08:13.482 05:27:17 -- json_config/json_config.sh@115 -- # echo '' 00:08:13.482 05:27:17 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:13.482 INFO: Checking if target configuration is the same... 00:08:13.482 05:27:17 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:13.482 05:27:17 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:13.482 05:27:17 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:13.482 05:27:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:13.482 + '[' 2 -ne 2 ']' 00:08:13.482 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:13.482 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
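[editor's note] The "Checking if target configuration is the same" pass runs json_diff.sh, which normalizes both configurations before diffing them. A rough, hedged equivalent of what it does, assuming config_filter.py reads JSON on stdin as it is used elsewhere in this log (the /tmp file names are illustrative; the script itself uses mktemp):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

    # Normalize both sides so ordering differences do not count as changes.
    $rpc save_config | $filter -method sort > /tmp/running.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json

    # diff exits non-zero when the configurations differ.
    if diff -u /tmp/saved.json /tmp/running.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi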
00:08:13.482 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:13.482 +++ basename /dev/fd/62 00:08:13.482 ++ mktemp /tmp/62.XXX 00:08:13.482 + tmp_file_1=/tmp/62.U9l 00:08:13.482 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:13.482 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:13.482 + tmp_file_2=/tmp/spdk_tgt_config.json.GmX 00:08:13.482 + ret=0 00:08:13.482 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:13.740 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:13.740 + diff -u /tmp/62.U9l /tmp/spdk_tgt_config.json.GmX 00:08:13.740 INFO: JSON config files are the same 00:08:13.740 + echo 'INFO: JSON config files are the same' 00:08:13.740 + rm /tmp/62.U9l /tmp/spdk_tgt_config.json.GmX 00:08:13.740 + exit 0 00:08:13.740 05:27:17 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:13.740 INFO: changing configuration and checking if this can be detected... 00:08:13.740 05:27:17 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:13.740 05:27:17 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:13.740 05:27:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:13.998 05:27:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:13.998 05:27:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:13.998 05:27:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:13.999 + '[' 2 -ne 2 ']' 00:08:13.999 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:13.999 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:13.999 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:13.999 +++ basename /dev/fd/62 00:08:13.999 ++ mktemp /tmp/62.XXX 00:08:13.999 + tmp_file_1=/tmp/62.kfR 00:08:13.999 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:13.999 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:13.999 + tmp_file_2=/tmp/spdk_tgt_config.json.Y8h 00:08:13.999 + ret=0 00:08:13.999 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:14.264 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:14.264 + diff -u /tmp/62.kfR /tmp/spdk_tgt_config.json.Y8h 00:08:14.264 + ret=1 00:08:14.264 + echo '=== Start of file: /tmp/62.kfR ===' 00:08:14.264 + cat /tmp/62.kfR 00:08:14.264 + echo '=== End of file: /tmp/62.kfR ===' 00:08:14.264 + echo '' 00:08:14.264 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Y8h ===' 00:08:14.264 + cat /tmp/spdk_tgt_config.json.Y8h 00:08:14.264 + echo '=== End of file: /tmp/spdk_tgt_config.json.Y8h ===' 00:08:14.264 + echo '' 00:08:14.264 + rm /tmp/62.kfR /tmp/spdk_tgt_config.json.Y8h 00:08:14.264 + exit 1 00:08:14.264 INFO: configuration change detected. 00:08:14.264 05:27:18 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
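[editor's note] The change-detection pass that follows relies on a throwaway bdev created earlier (MallocBdevForConfigChangeCheck): deleting it makes the live configuration diverge from the saved spdk_tgt_config.json, so the same normalized diff now returns 1. Sketched with the same rpc.py socket as above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Created earlier with: $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    # Removing it changes the running config relative to the saved file ...
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    # ... so re-running the save_config + sort + diff sequence above exits non-zero (ret=1 below).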
00:08:14.264 05:27:18 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:14.264 05:27:18 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:14.264 05:27:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.264 05:27:18 -- common/autotest_common.sh@10 -- # set +x 00:08:14.264 05:27:18 -- json_config/json_config.sh@360 -- # local ret=0 00:08:14.264 05:27:18 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:14.264 05:27:18 -- json_config/json_config.sh@370 -- # [[ -n 104169 ]] 00:08:14.264 05:27:18 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:14.265 05:27:18 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:14.265 05:27:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.265 05:27:18 -- common/autotest_common.sh@10 -- # set +x 00:08:14.265 05:27:18 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:14.265 05:27:18 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:14.265 05:27:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:14.525 05:27:18 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:14.525 05:27:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:14.782 05:27:18 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:14.782 05:27:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:15.041 05:27:18 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:15.041 05:27:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:15.300 05:27:19 -- json_config/json_config.sh@246 -- # uname -s 00:08:15.300 05:27:19 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:15.300 05:27:19 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:15.300 05:27:19 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:15.300 05:27:19 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:15.300 05:27:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.300 05:27:19 -- common/autotest_common.sh@10 -- # set +x 00:08:15.300 05:27:19 -- json_config/json_config.sh@376 -- # killprocess 104169 00:08:15.300 05:27:19 -- common/autotest_common.sh@926 -- # '[' -z 104169 ']' 00:08:15.300 05:27:19 -- common/autotest_common.sh@930 -- # kill -0 104169 00:08:15.300 05:27:19 -- common/autotest_common.sh@931 -- # uname 00:08:15.300 05:27:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.300 05:27:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104169 00:08:15.300 05:27:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:15.300 05:27:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:15.300 05:27:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104169' 00:08:15.300 killing process with pid 104169 00:08:15.300 05:27:19 -- common/autotest_common.sh@945 -- # kill 104169 00:08:15.300 05:27:19 -- common/autotest_common.sh@950 -- # wait 104169 00:08:16.709 05:27:20 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:16.709 05:27:20 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:16.709 05:27:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.709 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 05:27:20 -- json_config/json_config.sh@381 -- # return 0 00:08:16.709 INFO: Success 00:08:16.709 05:27:20 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:16.709 ************************************ 00:08:16.709 END TEST json_config 00:08:16.709 ************************************ 00:08:16.709 00:08:16.709 real 0m13.359s 00:08:16.709 user 0m18.815s 00:08:16.709 sys 0m2.369s 00:08:16.709 05:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.709 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 05:27:20 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:16.709 05:27:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:16.709 05:27:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.709 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 ************************************ 00:08:16.709 START TEST json_config_extra_key 00:08:16.709 ************************************ 00:08:16.709 05:27:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.709 05:27:20 -- nvmf/common.sh@7 -- # uname -s 00:08:16.709 05:27:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.709 05:27:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.709 05:27:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.709 05:27:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.709 05:27:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.709 05:27:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.709 05:27:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.709 05:27:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.709 05:27:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.709 05:27:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.709 05:27:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f605ddc1-5525-4d16-861d-8aec3de97a09 00:08:16.709 05:27:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=f605ddc1-5525-4d16-861d-8aec3de97a09 00:08:16.709 05:27:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.709 05:27:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.709 05:27:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:16.709 05:27:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.709 05:27:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.709 05:27:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.709 05:27:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.709 05:27:20 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:16.709 05:27:20 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:16.709 05:27:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:16.709 05:27:20 -- paths/export.sh@5 -- # export PATH 00:08:16.709 05:27:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:16.709 05:27:20 -- nvmf/common.sh@46 -- # : 0 00:08:16.709 05:27:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:16.709 05:27:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:16.709 05:27:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:16.709 05:27:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.709 05:27:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.709 05:27:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:16.709 05:27:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:16.709 05:27:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:16.709 INFO: launching applications... 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
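[editor's note] The extra_key test keeps its per-app bookkeeping in the bash associative arrays echoed above; json_config_test_start_app indexes into them when launching the target. A small sketch of that pattern, with the values copied from this log:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!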
00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=104351 00:08:16.709 Waiting for target to run... 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 104351 /var/tmp/spdk_tgt.sock 00:08:16.709 05:27:20 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:16.709 05:27:20 -- common/autotest_common.sh@819 -- # '[' -z 104351 ']' 00:08:16.709 05:27:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:16.709 05:27:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:16.709 05:27:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:16.709 05:27:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.709 05:27:20 -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 [2024-10-07 05:27:20.671658] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:16.709 [2024-10-07 05:27:20.671851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104351 ] 00:08:17.281 [2024-10-07 05:27:21.125657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.540 [2024-10-07 05:27:21.332335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.540 [2024-10-07 05:27:21.332662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.478 05:27:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.478 05:27:22 -- common/autotest_common.sh@852 -- # return 0 00:08:18.478 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:18.478 INFO: shutting down applications... 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
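[editor's note] The shutdown that follows is the usual SIGINT-then-poll loop: send SIGINT once, then check up to 30 times, half a second apart, whether the process is gone. Roughly, using the array from the sketch above:

    pid=${app_pid[target]}
    kill -SIGINT "$pid"

    for (( i = 0; i < 30; i++ )); do
        # kill -0 only tests for existence; it fails once the process has exited.
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done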
00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 104351 ]] 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 104351 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:18.478 05:27:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:19.047 05:27:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:19.047 05:27:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:19.047 05:27:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:19.047 05:27:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:19.614 05:27:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:19.614 05:27:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:19.614 05:27:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:19.614 05:27:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:19.873 05:27:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:19.873 05:27:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:19.873 05:27:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:19.873 05:27:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:20.442 05:27:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:20.442 05:27:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:20.442 05:27:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:20.442 05:27:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:21.009 05:27:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:21.009 05:27:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:21.009 05:27:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:21.009 05:27:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 104351 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:21.577 SPDK target shutdown done 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:21.577 Success 00:08:21.577 05:27:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:21.577 ************************************ 00:08:21.577 END TEST json_config_extra_key 00:08:21.577 ************************************ 00:08:21.577 00:08:21.577 real 0m4.827s 00:08:21.577 user 0m4.457s 00:08:21.577 sys 0m0.596s 00:08:21.577 05:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.577 05:27:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.577 
05:27:25 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:21.577 05:27:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:21.577 05:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.577 05:27:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.577 ************************************ 00:08:21.577 START TEST alias_rpc 00:08:21.577 ************************************ 00:08:21.577 05:27:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:21.577 * Looking for test storage... 00:08:21.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:21.577 05:27:25 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:21.577 05:27:25 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=104475 00:08:21.577 05:27:25 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 104475 00:08:21.577 05:27:25 -- common/autotest_common.sh@819 -- # '[' -z 104475 ']' 00:08:21.577 05:27:25 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.577 05:27:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.577 05:27:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:21.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.577 05:27:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.577 05:27:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:21.577 05:27:25 -- common/autotest_common.sh@10 -- # set +x 00:08:21.835 [2024-10-07 05:27:25.565076] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
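[editor's note] alias_rpc starts a bare spdk_tgt on the default RPC socket and waits for it to answer before running anything. waitforlisten itself lives in autotest_common.sh; a hedged stand-in for the same wait is to poll a cheap RPC (spdk_get_version is one of the methods listed later in this log) until it succeeds:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    # Poll the default socket (/var/tmp/spdk.sock) until the RPC server answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done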
00:08:21.835 [2024-10-07 05:27:25.565284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104475 ] 00:08:21.835 [2024-10-07 05:27:25.728759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.093 [2024-10-07 05:27:25.949141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.093 [2024-10-07 05:27:25.949399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.470 05:27:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:23.470 05:27:27 -- common/autotest_common.sh@852 -- # return 0 00:08:23.470 05:27:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:23.728 05:27:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 104475 00:08:23.728 05:27:27 -- common/autotest_common.sh@926 -- # '[' -z 104475 ']' 00:08:23.728 05:27:27 -- common/autotest_common.sh@930 -- # kill -0 104475 00:08:23.728 05:27:27 -- common/autotest_common.sh@931 -- # uname 00:08:23.728 05:27:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.728 05:27:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104475 00:08:23.728 05:27:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.728 05:27:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.728 killing process with pid 104475 00:08:23.728 05:27:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104475' 00:08:23.728 05:27:27 -- common/autotest_common.sh@945 -- # kill 104475 00:08:23.728 05:27:27 -- common/autotest_common.sh@950 -- # wait 104475 00:08:25.630 ************************************ 00:08:25.630 END TEST alias_rpc 00:08:25.630 ************************************ 00:08:25.630 00:08:25.630 real 0m3.856s 00:08:25.630 user 0m4.073s 00:08:25.630 sys 0m0.634s 00:08:25.630 05:27:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.630 05:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.630 05:27:29 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:25.630 05:27:29 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:25.630 05:27:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.630 05:27:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.630 05:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.630 ************************************ 00:08:25.630 START TEST spdkcli_tcp 00:08:25.630 ************************************ 00:08:25.630 05:27:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:25.630 * Looking for test storage... 
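[editor's note] The killprocess calls traced above follow a fixed pattern: confirm the pid is alive and looks like an SPDK reactor before killing and reaping it. A simplified sketch of that helper (the real one in autotest_common.sh also checks the OS and handles sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # still running?
        ps --no-headers -o comm= "$pid"     # sanity-check what is being killed (reactor_0 here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }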
00:08:25.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:25.630 05:27:29 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:25.630 05:27:29 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:25.630 05:27:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:25.630 05:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104589 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:25.630 05:27:29 -- spdkcli/tcp.sh@27 -- # waitforlisten 104589 00:08:25.630 05:27:29 -- common/autotest_common.sh@819 -- # '[' -z 104589 ']' 00:08:25.630 05:27:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.630 05:27:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.630 05:27:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.630 05:27:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.630 05:27:29 -- common/autotest_common.sh@10 -- # set +x 00:08:25.630 [2024-10-07 05:27:29.481238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
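[editor's note] tcp.sh exercises the RPC server over TCP by bridging the UNIX-domain socket with socat and pointing rpc.py at 127.0.0.1:9998; both commands appear verbatim a little further down. In isolation:

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP: 100 retries, 2 s timeout, exactly as in the trace below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods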
00:08:25.630 [2024-10-07 05:27:29.481434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104589 ] 00:08:25.889 [2024-10-07 05:27:29.651286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.889 [2024-10-07 05:27:29.835154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.889 [2024-10-07 05:27:29.835493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.889 [2024-10-07 05:27:29.835503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.267 05:27:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.267 05:27:31 -- common/autotest_common.sh@852 -- # return 0 00:08:27.267 05:27:31 -- spdkcli/tcp.sh@31 -- # socat_pid=104616 00:08:27.267 05:27:31 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:27.267 05:27:31 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:27.526 [ 00:08:27.526 "spdk_get_version", 00:08:27.526 "rpc_get_methods", 00:08:27.526 "trace_get_info", 00:08:27.526 "trace_get_tpoint_group_mask", 00:08:27.526 "trace_disable_tpoint_group", 00:08:27.526 "trace_enable_tpoint_group", 00:08:27.526 "trace_clear_tpoint_mask", 00:08:27.526 "trace_set_tpoint_mask", 00:08:27.526 "framework_get_pci_devices", 00:08:27.526 "framework_get_config", 00:08:27.526 "framework_get_subsystems", 00:08:27.526 "iobuf_get_stats", 00:08:27.526 "iobuf_set_options", 00:08:27.526 "sock_set_default_impl", 00:08:27.526 "sock_impl_set_options", 00:08:27.526 "sock_impl_get_options", 00:08:27.526 "vmd_rescan", 00:08:27.526 "vmd_remove_device", 00:08:27.526 "vmd_enable", 00:08:27.526 "accel_get_stats", 00:08:27.526 "accel_set_options", 00:08:27.526 "accel_set_driver", 00:08:27.526 "accel_crypto_key_destroy", 00:08:27.526 "accel_crypto_keys_get", 00:08:27.526 "accel_crypto_key_create", 00:08:27.526 "accel_assign_opc", 00:08:27.526 "accel_get_module_info", 00:08:27.526 "accel_get_opc_assignments", 00:08:27.526 "notify_get_notifications", 00:08:27.526 "notify_get_types", 00:08:27.526 "bdev_get_histogram", 00:08:27.526 "bdev_enable_histogram", 00:08:27.526 "bdev_set_qos_limit", 00:08:27.526 "bdev_set_qd_sampling_period", 00:08:27.526 "bdev_get_bdevs", 00:08:27.526 "bdev_reset_iostat", 00:08:27.526 "bdev_get_iostat", 00:08:27.526 "bdev_examine", 00:08:27.526 "bdev_wait_for_examine", 00:08:27.526 "bdev_set_options", 00:08:27.526 "scsi_get_devices", 00:08:27.526 "thread_set_cpumask", 00:08:27.526 "framework_get_scheduler", 00:08:27.526 "framework_set_scheduler", 00:08:27.526 "framework_get_reactors", 00:08:27.526 "thread_get_io_channels", 00:08:27.526 "thread_get_pollers", 00:08:27.526 "thread_get_stats", 00:08:27.526 "framework_monitor_context_switch", 00:08:27.526 "spdk_kill_instance", 00:08:27.526 "log_enable_timestamps", 00:08:27.526 "log_get_flags", 00:08:27.526 "log_clear_flag", 00:08:27.526 "log_set_flag", 00:08:27.526 "log_get_level", 00:08:27.526 "log_set_level", 00:08:27.526 "log_get_print_level", 00:08:27.526 "log_set_print_level", 00:08:27.526 "framework_enable_cpumask_locks", 00:08:27.526 "framework_disable_cpumask_locks", 00:08:27.526 "framework_wait_init", 00:08:27.526 "framework_start_init", 00:08:27.526 "virtio_blk_create_transport", 00:08:27.526 "virtio_blk_get_transports", 
00:08:27.526 "vhost_controller_set_coalescing", 00:08:27.526 "vhost_get_controllers", 00:08:27.526 "vhost_delete_controller", 00:08:27.526 "vhost_create_blk_controller", 00:08:27.526 "vhost_scsi_controller_remove_target", 00:08:27.526 "vhost_scsi_controller_add_target", 00:08:27.526 "vhost_start_scsi_controller", 00:08:27.526 "vhost_create_scsi_controller", 00:08:27.526 "nbd_get_disks", 00:08:27.526 "nbd_stop_disk", 00:08:27.526 "nbd_start_disk", 00:08:27.526 "env_dpdk_get_mem_stats", 00:08:27.526 "nvmf_subsystem_get_listeners", 00:08:27.526 "nvmf_subsystem_get_qpairs", 00:08:27.526 "nvmf_subsystem_get_controllers", 00:08:27.526 "nvmf_get_stats", 00:08:27.526 "nvmf_get_transports", 00:08:27.526 "nvmf_create_transport", 00:08:27.526 "nvmf_get_targets", 00:08:27.526 "nvmf_delete_target", 00:08:27.526 "nvmf_create_target", 00:08:27.526 "nvmf_subsystem_allow_any_host", 00:08:27.526 "nvmf_subsystem_remove_host", 00:08:27.526 "nvmf_subsystem_add_host", 00:08:27.526 "nvmf_subsystem_remove_ns", 00:08:27.526 "nvmf_subsystem_add_ns", 00:08:27.526 "nvmf_subsystem_listener_set_ana_state", 00:08:27.526 "nvmf_discovery_get_referrals", 00:08:27.526 "nvmf_discovery_remove_referral", 00:08:27.526 "nvmf_discovery_add_referral", 00:08:27.526 "nvmf_subsystem_remove_listener", 00:08:27.526 "nvmf_subsystem_add_listener", 00:08:27.527 "nvmf_delete_subsystem", 00:08:27.527 "nvmf_create_subsystem", 00:08:27.527 "nvmf_get_subsystems", 00:08:27.527 "nvmf_set_crdt", 00:08:27.527 "nvmf_set_config", 00:08:27.527 "nvmf_set_max_subsystems", 00:08:27.527 "iscsi_set_options", 00:08:27.527 "iscsi_get_auth_groups", 00:08:27.527 "iscsi_auth_group_remove_secret", 00:08:27.527 "iscsi_auth_group_add_secret", 00:08:27.527 "iscsi_delete_auth_group", 00:08:27.527 "iscsi_create_auth_group", 00:08:27.527 "iscsi_set_discovery_auth", 00:08:27.527 "iscsi_get_options", 00:08:27.527 "iscsi_target_node_request_logout", 00:08:27.527 "iscsi_target_node_set_redirect", 00:08:27.527 "iscsi_target_node_set_auth", 00:08:27.527 "iscsi_target_node_add_lun", 00:08:27.527 "iscsi_get_connections", 00:08:27.527 "iscsi_portal_group_set_auth", 00:08:27.527 "iscsi_start_portal_group", 00:08:27.527 "iscsi_delete_portal_group", 00:08:27.527 "iscsi_create_portal_group", 00:08:27.527 "iscsi_get_portal_groups", 00:08:27.527 "iscsi_delete_target_node", 00:08:27.527 "iscsi_target_node_remove_pg_ig_maps", 00:08:27.527 "iscsi_target_node_add_pg_ig_maps", 00:08:27.527 "iscsi_create_target_node", 00:08:27.527 "iscsi_get_target_nodes", 00:08:27.527 "iscsi_delete_initiator_group", 00:08:27.527 "iscsi_initiator_group_remove_initiators", 00:08:27.527 "iscsi_initiator_group_add_initiators", 00:08:27.527 "iscsi_create_initiator_group", 00:08:27.527 "iscsi_get_initiator_groups", 00:08:27.527 "iaa_scan_accel_module", 00:08:27.527 "dsa_scan_accel_module", 00:08:27.527 "ioat_scan_accel_module", 00:08:27.527 "accel_error_inject_error", 00:08:27.527 "bdev_iscsi_delete", 00:08:27.527 "bdev_iscsi_create", 00:08:27.527 "bdev_iscsi_set_options", 00:08:27.527 "bdev_virtio_attach_controller", 00:08:27.527 "bdev_virtio_scsi_get_devices", 00:08:27.527 "bdev_virtio_detach_controller", 00:08:27.527 "bdev_virtio_blk_set_hotplug", 00:08:27.527 "bdev_ftl_set_property", 00:08:27.527 "bdev_ftl_get_properties", 00:08:27.527 "bdev_ftl_get_stats", 00:08:27.527 "bdev_ftl_unmap", 00:08:27.527 "bdev_ftl_unload", 00:08:27.527 "bdev_ftl_delete", 00:08:27.527 "bdev_ftl_load", 00:08:27.527 "bdev_ftl_create", 00:08:27.527 "bdev_aio_delete", 00:08:27.527 "bdev_aio_rescan", 00:08:27.527 "bdev_aio_create", 
00:08:27.527 "blobfs_create", 00:08:27.527 "blobfs_detect", 00:08:27.527 "blobfs_set_cache_size", 00:08:27.527 "bdev_zone_block_delete", 00:08:27.527 "bdev_zone_block_create", 00:08:27.527 "bdev_delay_delete", 00:08:27.527 "bdev_delay_create", 00:08:27.527 "bdev_delay_update_latency", 00:08:27.527 "bdev_split_delete", 00:08:27.527 "bdev_split_create", 00:08:27.527 "bdev_error_inject_error", 00:08:27.527 "bdev_error_delete", 00:08:27.527 "bdev_error_create", 00:08:27.527 "bdev_raid_set_options", 00:08:27.527 "bdev_raid_remove_base_bdev", 00:08:27.527 "bdev_raid_add_base_bdev", 00:08:27.527 "bdev_raid_delete", 00:08:27.527 "bdev_raid_create", 00:08:27.527 "bdev_raid_get_bdevs", 00:08:27.527 "bdev_lvol_grow_lvstore", 00:08:27.527 "bdev_lvol_get_lvols", 00:08:27.527 "bdev_lvol_get_lvstores", 00:08:27.527 "bdev_lvol_delete", 00:08:27.527 "bdev_lvol_set_read_only", 00:08:27.527 "bdev_lvol_resize", 00:08:27.527 "bdev_lvol_decouple_parent", 00:08:27.527 "bdev_lvol_inflate", 00:08:27.527 "bdev_lvol_rename", 00:08:27.527 "bdev_lvol_clone_bdev", 00:08:27.527 "bdev_lvol_clone", 00:08:27.527 "bdev_lvol_snapshot", 00:08:27.527 "bdev_lvol_create", 00:08:27.527 "bdev_lvol_delete_lvstore", 00:08:27.527 "bdev_lvol_rename_lvstore", 00:08:27.527 "bdev_lvol_create_lvstore", 00:08:27.527 "bdev_passthru_delete", 00:08:27.527 "bdev_passthru_create", 00:08:27.527 "bdev_nvme_cuse_unregister", 00:08:27.527 "bdev_nvme_cuse_register", 00:08:27.527 "bdev_opal_new_user", 00:08:27.527 "bdev_opal_set_lock_state", 00:08:27.527 "bdev_opal_delete", 00:08:27.527 "bdev_opal_get_info", 00:08:27.527 "bdev_opal_create", 00:08:27.527 "bdev_nvme_opal_revert", 00:08:27.527 "bdev_nvme_opal_init", 00:08:27.527 "bdev_nvme_send_cmd", 00:08:27.527 "bdev_nvme_get_path_iostat", 00:08:27.527 "bdev_nvme_get_mdns_discovery_info", 00:08:27.527 "bdev_nvme_stop_mdns_discovery", 00:08:27.527 "bdev_nvme_start_mdns_discovery", 00:08:27.527 "bdev_nvme_set_multipath_policy", 00:08:27.527 "bdev_nvme_set_preferred_path", 00:08:27.527 "bdev_nvme_get_io_paths", 00:08:27.527 "bdev_nvme_remove_error_injection", 00:08:27.527 "bdev_nvme_add_error_injection", 00:08:27.527 "bdev_nvme_get_discovery_info", 00:08:27.527 "bdev_nvme_stop_discovery", 00:08:27.527 "bdev_nvme_start_discovery", 00:08:27.527 "bdev_nvme_get_controller_health_info", 00:08:27.527 "bdev_nvme_disable_controller", 00:08:27.527 "bdev_nvme_enable_controller", 00:08:27.527 "bdev_nvme_reset_controller", 00:08:27.527 "bdev_nvme_get_transport_statistics", 00:08:27.527 "bdev_nvme_apply_firmware", 00:08:27.527 "bdev_nvme_detach_controller", 00:08:27.527 "bdev_nvme_get_controllers", 00:08:27.527 "bdev_nvme_attach_controller", 00:08:27.527 "bdev_nvme_set_hotplug", 00:08:27.527 "bdev_nvme_set_options", 00:08:27.527 "bdev_null_resize", 00:08:27.527 "bdev_null_delete", 00:08:27.527 "bdev_null_create", 00:08:27.527 "bdev_malloc_delete", 00:08:27.527 "bdev_malloc_create" 00:08:27.527 ] 00:08:27.527 05:27:31 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:27.527 05:27:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:27.527 05:27:31 -- common/autotest_common.sh@10 -- # set +x 00:08:27.527 05:27:31 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:27.527 05:27:31 -- spdkcli/tcp.sh@38 -- # killprocess 104589 00:08:27.527 05:27:31 -- common/autotest_common.sh@926 -- # '[' -z 104589 ']' 00:08:27.527 05:27:31 -- common/autotest_common.sh@930 -- # kill -0 104589 00:08:27.527 05:27:31 -- common/autotest_common.sh@931 -- # uname 00:08:27.527 05:27:31 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:08:27.527 05:27:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104589 00:08:27.527 05:27:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:27.527 05:27:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:27.527 killing process with pid 104589 00:08:27.527 05:27:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104589' 00:08:27.527 05:27:31 -- common/autotest_common.sh@945 -- # kill 104589 00:08:27.527 05:27:31 -- common/autotest_common.sh@950 -- # wait 104589 00:08:29.432 ************************************ 00:08:29.432 END TEST spdkcli_tcp 00:08:29.432 ************************************ 00:08:29.432 00:08:29.432 real 0m3.977s 00:08:29.432 user 0m7.515s 00:08:29.432 sys 0m0.522s 00:08:29.432 05:27:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.432 05:27:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 05:27:33 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.432 05:27:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.432 05:27:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.432 05:27:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 ************************************ 00:08:29.432 START TEST dpdk_mem_utility 00:08:29.432 ************************************ 00:08:29.432 05:27:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:29.691 * Looking for test storage... 00:08:29.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:29.691 05:27:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:29.691 05:27:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104715 00:08:29.691 05:27:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104715 00:08:29.691 05:27:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:29.691 05:27:33 -- common/autotest_common.sh@819 -- # '[' -z 104715 ']' 00:08:29.691 05:27:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.691 05:27:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.691 05:27:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.691 05:27:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.691 05:27:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.691 [2024-10-07 05:27:33.544832] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:29.691 [2024-10-07 05:27:33.545608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104715 ] 00:08:29.950 [2024-10-07 05:27:33.715074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.950 [2024-10-07 05:27:33.885300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:29.950 [2024-10-07 05:27:33.885528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.329 05:27:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.329 05:27:35 -- common/autotest_common.sh@852 -- # return 0 00:08:31.329 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:31.329 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:31.329 05:27:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.329 05:27:35 -- common/autotest_common.sh@10 -- # set +x 00:08:31.329 { 00:08:31.329 "filename": "/tmp/spdk_mem_dump.txt" 00:08:31.329 } 00:08:31.329 05:27:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.329 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:31.329 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:31.329 1 heaps totaling size 820.000000 MiB 00:08:31.329 size: 820.000000 MiB heap id: 0 00:08:31.329 end heaps---------- 00:08:31.329 8 mempools totaling size 598.116089 MiB 00:08:31.329 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:31.329 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:31.329 size: 84.521057 MiB name: bdev_io_104715 00:08:31.329 size: 51.011292 MiB name: evtpool_104715 00:08:31.329 size: 50.003479 MiB name: msgpool_104715 00:08:31.329 size: 21.763794 MiB name: PDU_Pool 00:08:31.329 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:31.329 size: 0.026123 MiB name: Session_Pool 00:08:31.329 end mempools------- 00:08:31.329 6 memzones totaling size 4.142822 MiB 00:08:31.329 size: 1.000366 MiB name: RG_ring_0_104715 00:08:31.329 size: 1.000366 MiB name: RG_ring_1_104715 00:08:31.329 size: 1.000366 MiB name: RG_ring_4_104715 00:08:31.329 size: 1.000366 MiB name: RG_ring_5_104715 00:08:31.329 size: 0.125366 MiB name: RG_ring_2_104715 00:08:31.329 size: 0.015991 MiB name: RG_ring_3_104715 00:08:31.329 end memzones------- 00:08:31.329 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:31.329 heap id: 0 total size: 820.000000 MiB number of busy elements: 225 number of free elements: 18 00:08:31.329 list of free elements. 
size: 18.469971 MiB 00:08:31.329 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:31.329 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:31.329 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:31.329 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:31.329 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:31.329 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:31.329 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:31.329 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:31.329 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:31.329 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:31.329 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:31.329 element at address: 0x200000200000 with size: 0.835083 MiB 00:08:31.329 element at address: 0x20001b000000 with size: 0.562195 MiB 00:08:31.329 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:31.329 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:31.329 element at address: 0x200013800000 with size: 0.468140 MiB 00:08:31.329 element at address: 0x200028400000 with size: 0.398987 MiB 00:08:31.329 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:31.329 list of standard malloc elements. size: 199.265625 MiB 00:08:31.329 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:31.329 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:31.329 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:31.329 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:31.329 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:31.329 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:31.329 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:31.329 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:31.329 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:31.329 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:31.329 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:31.329 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:08:31.329 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:31.329 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:31.329 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:31.329 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:31.330 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0923c0 
with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:31.330 element at address: 0x200028466240 with size: 0.000244 MiB 
00:08:31.330 element at address: 0x200028466340 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d000 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:31.330 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:31.331 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:31.331 list of memzone associated elements. 
size: 602.264404 MiB 00:08:31.331 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:31.331 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:31.331 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:31.331 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:31.331 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:31.331 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_104715_0 00:08:31.331 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:31.331 associated memzone info: size: 48.002930 MiB name: MP_evtpool_104715_0 00:08:31.331 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:31.331 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104715_0 00:08:31.331 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:31.331 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:31.331 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:31.331 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:31.331 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:31.331 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_104715 00:08:31.331 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:31.331 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104715 00:08:31.331 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:31.331 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104715 00:08:31.331 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:31.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:31.331 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:31.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:31.331 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:31.331 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:31.331 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:31.331 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:31.331 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:31.331 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104715 00:08:31.331 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:31.331 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104715 00:08:31.331 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:31.331 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104715 00:08:31.331 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:31.331 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104715 00:08:31.331 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:31.331 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104715 00:08:31.331 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:31.331 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:31.331 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:31.331 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:31.331 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:31.331 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:31.331 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:31.331 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_104715 00:08:31.331 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:31.331 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:31.331 element at address: 0x200028466440 with size: 0.023804 MiB 00:08:31.331 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:31.331 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:31.331 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104715 00:08:31.331 element at address: 0x20002846c5c0 with size: 0.002502 MiB 00:08:31.331 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:31.331 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:31.331 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104715 00:08:31.331 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:31.331 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104715 00:08:31.331 element at address: 0x20002846d100 with size: 0.000366 MiB 00:08:31.331 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:31.331 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:31.331 05:27:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104715 00:08:31.331 05:27:35 -- common/autotest_common.sh@926 -- # '[' -z 104715 ']' 00:08:31.331 05:27:35 -- common/autotest_common.sh@930 -- # kill -0 104715 00:08:31.331 05:27:35 -- common/autotest_common.sh@931 -- # uname 00:08:31.331 05:27:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:31.331 05:27:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104715 00:08:31.331 05:27:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:31.331 05:27:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:31.331 killing process with pid 104715 00:08:31.331 05:27:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104715' 00:08:31.331 05:27:35 -- common/autotest_common.sh@945 -- # kill 104715 00:08:31.331 05:27:35 -- common/autotest_common.sh@950 -- # wait 104715 00:08:33.235 ************************************ 00:08:33.235 END TEST dpdk_mem_utility 00:08:33.235 ************************************ 00:08:33.235 00:08:33.235 real 0m3.527s 00:08:33.235 user 0m3.654s 00:08:33.235 sys 0m0.458s 00:08:33.235 05:27:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.235 05:27:36 -- common/autotest_common.sh@10 -- # set +x 00:08:33.235 05:27:36 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:33.235 05:27:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:33.235 05:27:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.235 05:27:36 -- common/autotest_common.sh@10 -- # set +x 00:08:33.235 ************************************ 00:08:33.235 START TEST event 00:08:33.235 ************************************ 00:08:33.235 05:27:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:33.235 * Looking for test storage... 
00:08:33.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:33.235 05:27:37 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:33.235 05:27:37 -- bdev/nbd_common.sh@6 -- # set -e 00:08:33.235 05:27:37 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:33.235 05:27:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:33.235 05:27:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.235 05:27:37 -- common/autotest_common.sh@10 -- # set +x 00:08:33.235 ************************************ 00:08:33.235 START TEST event_perf 00:08:33.235 ************************************ 00:08:33.235 05:27:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:33.235 Running I/O for 1 seconds...[2024-10-07 05:27:37.161665] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:33.235 [2024-10-07 05:27:37.162103] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104817 ] 00:08:33.494 [2024-10-07 05:27:37.366053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.754 [2024-10-07 05:27:37.548101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.754 [2024-10-07 05:27:37.548269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.754 [2024-10-07 05:27:37.548271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.754 [2024-10-07 05:27:37.548237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.129 Running I/O for 1 seconds... 00:08:35.129 lcore 0: 220043 00:08:35.129 lcore 1: 220036 00:08:35.129 lcore 2: 220038 00:08:35.129 lcore 3: 220040 00:08:35.129 done. 00:08:35.129 ************************************ 00:08:35.129 END TEST event_perf 00:08:35.129 ************************************ 00:08:35.129 00:08:35.129 real 0m1.929s 00:08:35.129 user 0m4.695s 00:08:35.129 sys 0m0.135s 00:08:35.129 05:27:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.129 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:35.129 05:27:39 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:35.129 05:27:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:35.129 05:27:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.129 05:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:35.129 ************************************ 00:08:35.129 START TEST event_reactor 00:08:35.129 ************************************ 00:08:35.129 05:27:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:35.402 [2024-10-07 05:27:39.122691] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:35.402 [2024-10-07 05:27:39.123013] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104868 ] 00:08:35.402 [2024-10-07 05:27:39.287686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.674 [2024-10-07 05:27:39.535453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.050 test_start 00:08:37.050 oneshot 00:08:37.050 tick 100 00:08:37.050 tick 100 00:08:37.050 tick 250 00:08:37.050 tick 100 00:08:37.050 tick 100 00:08:37.050 tick 100 00:08:37.050 tick 250 00:08:37.050 tick 500 00:08:37.050 tick 100 00:08:37.050 tick 100 00:08:37.050 tick 250 00:08:37.050 tick 100 00:08:37.050 tick 100 00:08:37.050 test_end 00:08:37.050 ************************************ 00:08:37.050 END TEST event_reactor 00:08:37.050 ************************************ 00:08:37.050 00:08:37.050 real 0m1.898s 00:08:37.050 user 0m1.665s 00:08:37.050 sys 0m0.132s 00:08:37.050 05:27:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.050 05:27:40 -- common/autotest_common.sh@10 -- # set +x 00:08:37.050 05:27:41 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:37.050 05:27:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:37.050 05:27:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.050 05:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:37.309 ************************************ 00:08:37.309 START TEST event_reactor_perf 00:08:37.309 ************************************ 00:08:37.309 05:27:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:37.309 [2024-10-07 05:27:41.086319] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:37.309 [2024-10-07 05:27:41.087210] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104918 ] 00:08:37.309 [2024-10-07 05:27:41.258472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.568 [2024-10-07 05:27:41.502497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.945 test_start 00:08:38.945 test_end 00:08:38.945 Performance: 308572 events per second 00:08:38.945 00:08:38.945 real 0m1.801s 00:08:38.945 user 0m1.555s 00:08:38.945 sys 0m0.144s 00:08:38.945 05:27:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.945 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:38.945 ************************************ 00:08:38.945 END TEST event_reactor_perf 00:08:38.945 ************************************ 00:08:38.945 05:27:42 -- event/event.sh@49 -- # uname -s 00:08:38.945 05:27:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:38.945 05:27:42 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:38.945 05:27:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:38.945 05:27:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.945 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:38.945 ************************************ 00:08:38.945 START TEST event_scheduler 00:08:38.945 ************************************ 00:08:38.945 05:27:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:39.204 * Looking for test storage... 00:08:39.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:39.204 05:27:42 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:39.204 05:27:42 -- scheduler/scheduler.sh@35 -- # scheduler_pid=104996 00:08:39.204 05:27:42 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.204 05:27:42 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:39.204 05:27:42 -- scheduler/scheduler.sh@37 -- # waitforlisten 104996 00:08:39.204 05:27:42 -- common/autotest_common.sh@819 -- # '[' -z 104996 ']' 00:08:39.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.204 05:27:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.204 05:27:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:39.204 05:27:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.204 05:27:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:39.204 05:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:39.204 [2024-10-07 05:27:43.092847] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:39.204 [2024-10-07 05:27:43.093220] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104996 ] 00:08:39.463 [2024-10-07 05:27:43.283212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.721 [2024-10-07 05:27:43.465953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.721 [2024-10-07 05:27:43.466100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.721 [2024-10-07 05:27:43.466225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.721 [2024-10-07 05:27:43.466228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.289 05:27:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.289 05:27:44 -- common/autotest_common.sh@852 -- # return 0 00:08:40.289 05:27:44 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:40.289 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.289 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.289 POWER: Env isn't set yet! 00:08:40.289 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:40.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:40.289 POWER: Cannot set governor of lcore 0 to userspace 00:08:40.289 POWER: Attempting to initialise PSTAT power management... 00:08:40.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:40.289 POWER: Cannot set governor of lcore 0 to performance 00:08:40.289 POWER: Attempting to initialise AMD PSTATE power management... 00:08:40.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:40.289 POWER: Cannot set governor of lcore 0 to userspace 00:08:40.289 POWER: Attempting to initialise CPPC power management... 00:08:40.289 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:40.289 POWER: Cannot set governor of lcore 0 to userspace 00:08:40.289 POWER: Attempting to initialise VM power management... 
00:08:40.289 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:40.289 POWER: Unable to set Power Management Environment for lcore 0 00:08:40.289 [2024-10-07 05:27:44.012584] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:40.290 [2024-10-07 05:27:44.012624] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:40.290 [2024-10-07 05:27:44.012645] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:40.290 [2024-10-07 05:27:44.012686] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:40.290 [2024-10-07 05:27:44.012761] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:40.290 [2024-10-07 05:27:44.012797] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:40.290 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.290 05:27:44 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:40.290 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.290 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 [2024-10-07 05:27:44.286612] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:40.549 05:27:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:40.549 05:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 ************************************ 00:08:40.549 START TEST scheduler_create_thread 00:08:40.549 ************************************ 00:08:40.549 05:27:44 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 2 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 3 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 4 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 5 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 6 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 7 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 8 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 9 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 10 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:40.549 05:27:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.549 05:27:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:40.549 05:27:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.549 05:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:41.485 05:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.485 05:27:45 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:41.485 05:27:45 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:41.485 05:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.485 05:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 05:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:42.862 00:08:42.862 real 0m2.144s 00:08:42.862 user 0m0.022s 00:08:42.862 sys 0m0.000s 00:08:42.862 05:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.862 ************************************ 00:08:42.862 END TEST scheduler_create_thread 
00:08:42.862 ************************************ 00:08:42.862 05:27:46 -- common/autotest_common.sh@10 -- # set +x 00:08:42.862 05:27:46 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:42.862 05:27:46 -- scheduler/scheduler.sh@46 -- # killprocess 104996 00:08:42.862 05:27:46 -- common/autotest_common.sh@926 -- # '[' -z 104996 ']' 00:08:42.862 05:27:46 -- common/autotest_common.sh@930 -- # kill -0 104996 00:08:42.862 05:27:46 -- common/autotest_common.sh@931 -- # uname 00:08:42.862 05:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:42.862 05:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104996 00:08:42.862 05:27:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:42.862 killing process with pid 104996 00:08:42.862 05:27:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:42.862 05:27:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104996' 00:08:42.862 05:27:46 -- common/autotest_common.sh@945 -- # kill 104996 00:08:42.862 05:27:46 -- common/autotest_common.sh@950 -- # wait 104996 00:08:43.121 [2024-10-07 05:27:46.925382] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:44.057 00:08:44.057 real 0m5.052s 00:08:44.057 user 0m8.448s 00:08:44.057 sys 0m0.411s 00:08:44.057 05:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.057 05:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:44.057 ************************************ 00:08:44.057 END TEST event_scheduler 00:08:44.057 ************************************ 00:08:44.057 05:27:47 -- event/event.sh@51 -- # modprobe -n nbd 00:08:44.057 05:27:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:44.057 05:27:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.057 05:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.057 05:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:44.057 ************************************ 00:08:44.057 START TEST app_repeat 00:08:44.057 ************************************ 00:08:44.057 05:27:48 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:08:44.057 05:27:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.057 05:27:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.057 05:27:48 -- event/event.sh@13 -- # local nbd_list 00:08:44.057 05:27:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.057 05:27:48 -- event/event.sh@14 -- # local bdev_list 00:08:44.057 05:27:48 -- event/event.sh@15 -- # local repeat_times=4 00:08:44.057 05:27:48 -- event/event.sh@17 -- # modprobe nbd 00:08:44.057 05:27:48 -- event/event.sh@19 -- # repeat_pid=105119 00:08:44.057 05:27:48 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:44.057 05:27:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.057 Process app_repeat pid: 105119 00:08:44.057 05:27:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105119' 00:08:44.057 05:27:48 -- event/event.sh@23 -- # for i in {0..2} 00:08:44.057 spdk_app_start Round 0 00:08:44.057 05:27:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:44.057 05:27:48 -- event/event.sh@25 -- # waitforlisten 105119 /var/tmp/spdk-nbd.sock 00:08:44.057 05:27:48 -- common/autotest_common.sh@819 -- # '[' -z 105119 ']' 00:08:44.057 05:27:48 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:44.057 05:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:44.057 05:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:44.057 05:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.057 05:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:44.316 [2024-10-07 05:27:48.075017] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:44.316 [2024-10-07 05:27:48.075197] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105119 ] 00:08:44.316 [2024-10-07 05:27:48.247462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.574 [2024-10-07 05:27:48.437207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.574 [2024-10-07 05:27:48.437212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.145 05:27:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.145 05:27:48 -- common/autotest_common.sh@852 -- # return 0 00:08:45.145 05:27:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:45.417 Malloc0 00:08:45.417 05:27:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:45.676 Malloc1 00:08:45.676 05:27:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@12 -- # local i 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.676 05:27:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:45.935 /dev/nbd0 00:08:45.935 05:27:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:45.935 05:27:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:45.935 05:27:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:45.935 05:27:49 -- common/autotest_common.sh@857 -- # local i 00:08:45.935 05:27:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:45.935 
05:27:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:45.935 05:27:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:45.935 05:27:49 -- common/autotest_common.sh@861 -- # break 00:08:45.935 05:27:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:45.935 05:27:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:45.935 05:27:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:45.935 1+0 records in 00:08:45.935 1+0 records out 00:08:45.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002314 s, 17.7 MB/s 00:08:45.935 05:27:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.935 05:27:49 -- common/autotest_common.sh@874 -- # size=4096 00:08:45.935 05:27:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.935 05:27:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:45.935 05:27:49 -- common/autotest_common.sh@877 -- # return 0 00:08:45.935 05:27:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.935 05:27:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.935 05:27:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:46.194 /dev/nbd1 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:46.194 05:27:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:46.194 05:27:50 -- common/autotest_common.sh@857 -- # local i 00:08:46.194 05:27:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:46.194 05:27:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:46.194 05:27:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:46.194 05:27:50 -- common/autotest_common.sh@861 -- # break 00:08:46.194 05:27:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:46.194 05:27:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:46.194 05:27:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:46.194 1+0 records in 00:08:46.194 1+0 records out 00:08:46.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567253 s, 7.2 MB/s 00:08:46.194 05:27:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:46.194 05:27:50 -- common/autotest_common.sh@874 -- # size=4096 00:08:46.194 05:27:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:46.194 05:27:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:46.194 05:27:50 -- common/autotest_common.sh@877 -- # return 0 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.194 05:27:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:46.453 { 00:08:46.453 "nbd_device": "/dev/nbd0", 00:08:46.453 "bdev_name": "Malloc0" 00:08:46.453 }, 00:08:46.453 { 00:08:46.453 "nbd_device": 
"/dev/nbd1", 00:08:46.453 "bdev_name": "Malloc1" 00:08:46.453 } 00:08:46.453 ]' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:46.453 { 00:08:46.453 "nbd_device": "/dev/nbd0", 00:08:46.453 "bdev_name": "Malloc0" 00:08:46.453 }, 00:08:46.453 { 00:08:46.453 "nbd_device": "/dev/nbd1", 00:08:46.453 "bdev_name": "Malloc1" 00:08:46.453 } 00:08:46.453 ]' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:46.453 /dev/nbd1' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:46.453 /dev/nbd1' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@65 -- # count=2 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@95 -- # count=2 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:46.453 256+0 records in 00:08:46.453 256+0 records out 00:08:46.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0069766 s, 150 MB/s 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.453 05:27:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:46.712 256+0 records in 00:08:46.712 256+0 records out 00:08:46.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267959 s, 39.1 MB/s 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:46.712 256+0 records in 00:08:46.712 256+0 records out 00:08:46.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.035081 s, 29.9 MB/s 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@85 -- 
# rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@51 -- # local i 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.712 05:27:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:46.970 05:27:50 -- bdev/nbd_common.sh@41 -- # break 00:08:46.971 05:27:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.971 05:27:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:46.971 05:27:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@41 -- # break 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.229 05:27:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.229 05:27:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:47.229 05:27:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:47.229 05:27:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@65 -- # true 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@65 -- # count=0 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@104 -- # count=0 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:47.488 05:27:51 -- bdev/nbd_common.sh@109 -- # return 0 00:08:47.488 05:27:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:47.746 05:27:51 -- event/event.sh@35 -- # sleep 3 00:08:49.121 [2024-10-07 05:27:52.729844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.121 [2024-10-07 05:27:52.917963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.121 [2024-10-07 
05:27:52.917973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.380 [2024-10-07 05:27:53.107425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:49.380 [2024-10-07 05:27:53.107578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:50.756 spdk_app_start Round 1 00:08:50.756 05:27:54 -- event/event.sh@23 -- # for i in {0..2} 00:08:50.756 05:27:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:50.756 05:27:54 -- event/event.sh@25 -- # waitforlisten 105119 /var/tmp/spdk-nbd.sock 00:08:50.756 05:27:54 -- common/autotest_common.sh@819 -- # '[' -z 105119 ']' 00:08:50.756 05:27:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:50.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:50.756 05:27:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.756 05:27:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:50.756 05:27:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.756 05:27:54 -- common/autotest_common.sh@10 -- # set +x 00:08:51.015 05:27:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.015 05:27:54 -- common/autotest_common.sh@852 -- # return 0 00:08:51.015 05:27:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:51.274 Malloc0 00:08:51.274 05:27:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:51.533 Malloc1 00:08:51.533 05:27:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@12 -- # local i 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.533 05:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:51.792 /dev/nbd0 00:08:51.792 05:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.792 05:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.792 05:27:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:51.792 05:27:55 -- common/autotest_common.sh@857 -- # local i 00:08:51.792 05:27:55 -- common/autotest_common.sh@859 -- # (( i 
= 1 )) 00:08:51.792 05:27:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:51.792 05:27:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:51.792 05:27:55 -- common/autotest_common.sh@861 -- # break 00:08:51.792 05:27:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:51.792 05:27:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:51.792 05:27:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.792 1+0 records in 00:08:51.792 1+0 records out 00:08:51.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199244 s, 20.6 MB/s 00:08:51.792 05:27:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.792 05:27:55 -- common/autotest_common.sh@874 -- # size=4096 00:08:51.792 05:27:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.792 05:27:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:51.792 05:27:55 -- common/autotest_common.sh@877 -- # return 0 00:08:51.792 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.792 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.792 05:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:52.050 /dev/nbd1 00:08:52.050 05:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.050 05:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.051 05:27:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:52.051 05:27:55 -- common/autotest_common.sh@857 -- # local i 00:08:52.051 05:27:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:52.051 05:27:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:52.051 05:27:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:52.051 05:27:55 -- common/autotest_common.sh@861 -- # break 00:08:52.051 05:27:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:52.051 05:27:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:52.051 05:27:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.051 1+0 records in 00:08:52.051 1+0 records out 00:08:52.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426011 s, 9.6 MB/s 00:08:52.051 05:27:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.051 05:27:55 -- common/autotest_common.sh@874 -- # size=4096 00:08:52.051 05:27:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:52.051 05:27:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:52.051 05:27:55 -- common/autotest_common.sh@877 -- # return 0 00:08:52.051 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.051 05:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.051 05:27:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.051 05:27:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.051 05:27:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.309 05:27:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:52.309 { 00:08:52.309 "nbd_device": "/dev/nbd0", 00:08:52.309 "bdev_name": "Malloc0" 00:08:52.309 }, 00:08:52.309 { 00:08:52.309 
"nbd_device": "/dev/nbd1", 00:08:52.309 "bdev_name": "Malloc1" 00:08:52.309 } 00:08:52.309 ]' 00:08:52.309 05:27:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.309 05:27:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:52.309 { 00:08:52.309 "nbd_device": "/dev/nbd0", 00:08:52.309 "bdev_name": "Malloc0" 00:08:52.309 }, 00:08:52.309 { 00:08:52.309 "nbd_device": "/dev/nbd1", 00:08:52.309 "bdev_name": "Malloc1" 00:08:52.309 } 00:08:52.309 ]' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:52.568 /dev/nbd1' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:52.568 /dev/nbd1' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@65 -- # count=2 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@95 -- # count=2 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:52.568 256+0 records in 00:08:52.568 256+0 records out 00:08:52.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432311 s, 243 MB/s 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:52.568 256+0 records in 00:08:52.568 256+0 records out 00:08:52.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207459 s, 50.5 MB/s 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:52.568 256+0 records in 00:08:52.568 256+0 records out 00:08:52.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327518 s, 32.0 MB/s 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:52.568 05:27:56 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@51 -- # local i 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.568 05:27:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@41 -- # break 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.827 05:27:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@41 -- # break 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.086 05:27:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@65 -- # true 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.344 05:27:57 -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.344 05:27:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.912 05:27:57 -- event/event.sh@35 -- # sleep 3 00:08:54.848 [2024-10-07 05:27:58.745907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.105 [2024-10-07 05:27:58.922468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
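Each app_repeat round above exercises the same NBD round trip: create two 64 MiB malloc bdevs over the RPC socket, expose them as kernel block devices /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each, compare it back, then detach the devices and kill the app. A rough manual equivalent, assuming an SPDK app is already serving RPCs on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (rpc.py stands for scripts/rpc.py in the SPDK tree, and /tmp/nbdrandtest is only an illustrative scratch path; the test itself uses test/event/nbdrandtest and checks both devices):

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # 64 MiB bdev, 4 KiB blocks -> prints Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0   # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of reference data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it through the NBD device
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                             # must match byte for byte
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0            # detach the device
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                      # expect an empty list: []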
00:08:55.105 [2024-10-07 05:27:58.922469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.363 [2024-10-07 05:27:59.097147] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:55.363 [2024-10-07 05:27:59.097214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:56.737 spdk_app_start Round 2 00:08:56.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.737 05:28:00 -- event/event.sh@23 -- # for i in {0..2} 00:08:56.737 05:28:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:56.737 05:28:00 -- event/event.sh@25 -- # waitforlisten 105119 /var/tmp/spdk-nbd.sock 00:08:56.737 05:28:00 -- common/autotest_common.sh@819 -- # '[' -z 105119 ']' 00:08:56.737 05:28:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.737 05:28:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.737 05:28:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:56.737 05:28:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.737 05:28:00 -- common/autotest_common.sh@10 -- # set +x 00:08:56.995 05:28:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.995 05:28:00 -- common/autotest_common.sh@852 -- # return 0 00:08:56.995 05:28:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.254 Malloc0 00:08:57.254 05:28:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.512 Malloc1 00:08:57.512 05:28:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@12 -- # local i 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.512 05:28:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.771 /dev/nbd0 00:08:57.771 05:28:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.771 05:28:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.771 05:28:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:57.771 05:28:01 -- common/autotest_common.sh@857 -- # local i 00:08:57.771 05:28:01 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:57.771 05:28:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:57.771 05:28:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:57.771 05:28:01 -- common/autotest_common.sh@861 -- # break 00:08:57.771 05:28:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:57.771 05:28:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:57.771 05:28:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.771 1+0 records in 00:08:57.771 1+0 records out 00:08:57.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501307 s, 8.2 MB/s 00:08:57.771 05:28:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.771 05:28:01 -- common/autotest_common.sh@874 -- # size=4096 00:08:57.771 05:28:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.771 05:28:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:57.771 05:28:01 -- common/autotest_common.sh@877 -- # return 0 00:08:57.771 05:28:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.771 05:28:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.771 05:28:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.029 /dev/nbd1 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.288 05:28:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:58.288 05:28:02 -- common/autotest_common.sh@857 -- # local i 00:08:58.288 05:28:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:58.288 05:28:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:58.288 05:28:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:58.288 05:28:02 -- common/autotest_common.sh@861 -- # break 00:08:58.288 05:28:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:58.288 05:28:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:58.288 05:28:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.288 1+0 records in 00:08:58.288 1+0 records out 00:08:58.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264113 s, 15.5 MB/s 00:08:58.288 05:28:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.288 05:28:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:58.288 05:28:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.288 05:28:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:58.288 05:28:02 -- common/autotest_common.sh@877 -- # return 0 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.288 { 00:08:58.288 "nbd_device": "/dev/nbd0", 00:08:58.288 "bdev_name": "Malloc0" 
00:08:58.288 }, 00:08:58.288 { 00:08:58.288 "nbd_device": "/dev/nbd1", 00:08:58.288 "bdev_name": "Malloc1" 00:08:58.288 } 00:08:58.288 ]' 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.288 { 00:08:58.288 "nbd_device": "/dev/nbd0", 00:08:58.288 "bdev_name": "Malloc0" 00:08:58.288 }, 00:08:58.288 { 00:08:58.288 "nbd_device": "/dev/nbd1", 00:08:58.288 "bdev_name": "Malloc1" 00:08:58.288 } 00:08:58.288 ]' 00:08:58.288 05:28:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.547 /dev/nbd1' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.547 /dev/nbd1' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.547 256+0 records in 00:08:58.547 256+0 records out 00:08:58.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766159 s, 137 MB/s 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.547 256+0 records in 00:08:58.547 256+0 records out 00:08:58.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027442 s, 38.2 MB/s 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.547 256+0 records in 00:08:58.547 256+0 records out 00:08:58.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0444883 s, 23.6 MB/s 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@51 -- # local i 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.547 05:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@41 -- # break 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.807 05:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.066 05:28:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.066 05:28:03 -- bdev/nbd_common.sh@41 -- # break 00:08:59.066 05:28:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.066 05:28:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.066 05:28:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.066 05:28:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.326 05:28:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.326 05:28:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.326 05:28:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@65 -- # true 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.585 05:28:03 -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.585 05:28:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.843 05:28:03 -- event/event.sh@35 -- # sleep 3 00:09:01.220 [2024-10-07 05:28:04.838644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.220 [2024-10-07 05:28:05.043460] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:09:01.220 [2024-10-07 05:28:05.043468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.479 [2024-10-07 05:28:05.231026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.479 [2024-10-07 05:28:05.231146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.856 05:28:06 -- event/event.sh@38 -- # waitforlisten 105119 /var/tmp/spdk-nbd.sock 00:09:02.856 05:28:06 -- common/autotest_common.sh@819 -- # '[' -z 105119 ']' 00:09:02.856 05:28:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.856 05:28:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.856 05:28:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:02.856 05:28:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.856 05:28:06 -- common/autotest_common.sh@10 -- # set +x 00:09:03.115 05:28:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.115 05:28:06 -- common/autotest_common.sh@852 -- # return 0 00:09:03.115 05:28:06 -- event/event.sh@39 -- # killprocess 105119 00:09:03.115 05:28:06 -- common/autotest_common.sh@926 -- # '[' -z 105119 ']' 00:09:03.115 05:28:06 -- common/autotest_common.sh@930 -- # kill -0 105119 00:09:03.115 05:28:06 -- common/autotest_common.sh@931 -- # uname 00:09:03.115 05:28:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:03.115 05:28:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105119 00:09:03.115 05:28:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:03.115 05:28:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:03.115 killing process with pid 105119 00:09:03.115 05:28:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105119' 00:09:03.115 05:28:07 -- common/autotest_common.sh@945 -- # kill 105119 00:09:03.115 05:28:07 -- common/autotest_common.sh@950 -- # wait 105119 00:09:04.053 spdk_app_start is called in Round 0. 00:09:04.053 Shutdown signal received, stop current app iteration 00:09:04.053 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:04.053 spdk_app_start is called in Round 1. 00:09:04.053 Shutdown signal received, stop current app iteration 00:09:04.053 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:04.053 spdk_app_start is called in Round 2. 00:09:04.053 Shutdown signal received, stop current app iteration 00:09:04.053 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:09:04.053 spdk_app_start is called in Round 3. 
00:09:04.053 Shutdown signal received, stop current app iteration 00:09:04.053 05:28:07 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:04.053 05:28:07 -- event/event.sh@42 -- # return 0 00:09:04.053 00:09:04.053 real 0m19.975s 00:09:04.053 user 0m42.578s 00:09:04.053 sys 0m2.852s 00:09:04.053 05:28:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.053 05:28:07 -- common/autotest_common.sh@10 -- # set +x 00:09:04.053 ************************************ 00:09:04.053 END TEST app_repeat 00:09:04.053 ************************************ 00:09:04.313 05:28:08 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:04.313 05:28:08 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:04.313 05:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.313 05:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.313 05:28:08 -- common/autotest_common.sh@10 -- # set +x 00:09:04.313 ************************************ 00:09:04.313 START TEST cpu_locks 00:09:04.313 ************************************ 00:09:04.313 05:28:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:04.313 * Looking for test storage... 00:09:04.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:04.313 05:28:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:04.313 05:28:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:04.313 05:28:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:04.313 05:28:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:04.313 05:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.313 05:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.313 05:28:08 -- common/autotest_common.sh@10 -- # set +x 00:09:04.313 ************************************ 00:09:04.313 START TEST default_locks 00:09:04.313 ************************************ 00:09:04.313 05:28:08 -- common/autotest_common.sh@1104 -- # default_locks 00:09:04.313 05:28:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=105998 00:09:04.313 05:28:08 -- event/cpu_locks.sh@47 -- # waitforlisten 105998 00:09:04.313 05:28:08 -- common/autotest_common.sh@819 -- # '[' -z 105998 ']' 00:09:04.313 05:28:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.313 05:28:08 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.313 05:28:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:04.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.313 05:28:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.313 05:28:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:04.313 05:28:08 -- common/autotest_common.sh@10 -- # set +x 00:09:04.313 [2024-10-07 05:28:08.210708] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:04.313 [2024-10-07 05:28:08.210871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105998 ] 00:09:04.574 [2024-10-07 05:28:08.380422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.833 [2024-10-07 05:28:08.577085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.833 [2024-10-07 05:28:08.577363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.212 05:28:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:06.212 05:28:09 -- common/autotest_common.sh@852 -- # return 0 00:09:06.212 05:28:09 -- event/cpu_locks.sh@49 -- # locks_exist 105998 00:09:06.212 05:28:09 -- event/cpu_locks.sh@22 -- # lslocks -p 105998 00:09:06.212 05:28:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.212 05:28:10 -- event/cpu_locks.sh@50 -- # killprocess 105998 00:09:06.212 05:28:10 -- common/autotest_common.sh@926 -- # '[' -z 105998 ']' 00:09:06.212 05:28:10 -- common/autotest_common.sh@930 -- # kill -0 105998 00:09:06.212 05:28:10 -- common/autotest_common.sh@931 -- # uname 00:09:06.212 05:28:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:06.212 05:28:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105998 00:09:06.212 05:28:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:06.212 05:28:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:06.212 killing process with pid 105998 00:09:06.212 05:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105998' 00:09:06.212 05:28:10 -- common/autotest_common.sh@945 -- # kill 105998 00:09:06.212 05:28:10 -- common/autotest_common.sh@950 -- # wait 105998 00:09:08.116 05:28:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 105998 00:09:08.116 05:28:11 -- common/autotest_common.sh@640 -- # local es=0 00:09:08.116 05:28:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105998 00:09:08.116 05:28:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:08.116 05:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:08.116 05:28:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:08.116 05:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:08.116 05:28:11 -- common/autotest_common.sh@643 -- # waitforlisten 105998 00:09:08.116 05:28:11 -- common/autotest_common.sh@819 -- # '[' -z 105998 ']' 00:09:08.116 05:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.116 05:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.116 05:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
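The default_locks flow above reduces to a small shell pattern: launch spdk_tgt pinned to a single core, confirm with lslocks that the process is holding an spdk_cpu_lock file lock, then kill it; the 'No such process' and ERROR lines just below are the follow-up check that waiting on the dead pid now fails. A condensed sketch of the positive half, assuming the same build path as this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # core mask 0x1: the app takes the CPU core lock for core 0
    pid=$!
    sleep 1                                                     # stand-in for the script's waitforlisten polling of the RPC socket
    lslocks -p "$pid" | grep -q spdk_cpu_lock                   # the lock must be visible while the app is alive
    kill "$pid" && wait "$pid"                                  # the core lock is released when the process exits

The pid (105998 in this run) and the exact lock-file path vary from run to run; the test only greps the lslocks output for the spdk_cpu_lock substring.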
00:09:08.116 05:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.116 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.116 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105998) - No such process 00:09:08.116 ERROR: process (pid: 105998) is no longer running 00:09:08.116 05:28:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.116 05:28:11 -- common/autotest_common.sh@852 -- # return 1 00:09:08.116 05:28:11 -- common/autotest_common.sh@643 -- # es=1 00:09:08.116 05:28:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:08.116 05:28:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:08.116 05:28:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:08.116 05:28:11 -- event/cpu_locks.sh@54 -- # no_locks 00:09:08.116 05:28:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:08.116 05:28:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:08.116 05:28:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:08.116 00:09:08.116 real 0m3.709s 00:09:08.116 user 0m3.821s 00:09:08.116 sys 0m0.654s 00:09:08.116 ************************************ 00:09:08.116 END TEST default_locks 00:09:08.116 ************************************ 00:09:08.116 05:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.117 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.117 05:28:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:08.117 05:28:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.117 05:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.117 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.117 ************************************ 00:09:08.117 START TEST default_locks_via_rpc 00:09:08.117 ************************************ 00:09:08.117 05:28:11 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:08.117 05:28:11 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=106173 00:09:08.117 05:28:11 -- event/cpu_locks.sh@63 -- # waitforlisten 106173 00:09:08.117 05:28:11 -- common/autotest_common.sh@819 -- # '[' -z 106173 ']' 00:09:08.117 05:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.117 05:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.117 05:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.117 05:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.117 05:28:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.117 05:28:11 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.117 [2024-10-07 05:28:11.948312] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:08.117 [2024-10-07 05:28:11.948703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106173 ] 00:09:08.375 [2024-10-07 05:28:12.099223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.375 [2024-10-07 05:28:12.294329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.375 [2024-10-07 05:28:12.294555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.754 05:28:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.754 05:28:13 -- common/autotest_common.sh@852 -- # return 0 00:09:09.754 05:28:13 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:09.754 05:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.754 05:28:13 -- common/autotest_common.sh@10 -- # set +x 00:09:09.754 05:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.754 05:28:13 -- event/cpu_locks.sh@67 -- # no_locks 00:09:09.754 05:28:13 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:09.754 05:28:13 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:09.754 05:28:13 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:09.754 05:28:13 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:09.754 05:28:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.754 05:28:13 -- common/autotest_common.sh@10 -- # set +x 00:09:09.754 05:28:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.754 05:28:13 -- event/cpu_locks.sh@71 -- # locks_exist 106173 00:09:09.754 05:28:13 -- event/cpu_locks.sh@22 -- # lslocks -p 106173 00:09:09.754 05:28:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:10.013 05:28:13 -- event/cpu_locks.sh@73 -- # killprocess 106173 00:09:10.013 05:28:13 -- common/autotest_common.sh@926 -- # '[' -z 106173 ']' 00:09:10.013 05:28:13 -- common/autotest_common.sh@930 -- # kill -0 106173 00:09:10.013 05:28:13 -- common/autotest_common.sh@931 -- # uname 00:09:10.013 05:28:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:10.013 05:28:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106173 00:09:10.013 05:28:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:10.013 killing process with pid 106173 00:09:10.013 05:28:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:10.013 05:28:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106173' 00:09:10.013 05:28:13 -- common/autotest_common.sh@945 -- # kill 106173 00:09:10.013 05:28:13 -- common/autotest_common.sh@950 -- # wait 106173 00:09:11.916 ************************************ 00:09:11.916 END TEST default_locks_via_rpc 00:09:11.916 ************************************ 00:09:11.916 00:09:11.916 real 0m3.815s 00:09:11.916 user 0m4.006s 00:09:11.916 sys 0m0.660s 00:09:11.916 05:28:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.916 05:28:15 -- common/autotest_common.sh@10 -- # set +x 00:09:11.916 05:28:15 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:11.916 05:28:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:11.916 05:28:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.916 05:28:15 -- common/autotest_common.sh@10 -- # set +x 00:09:11.916 
************************************ 00:09:11.916 START TEST non_locking_app_on_locked_coremask 00:09:11.916 ************************************ 00:09:11.916 05:28:15 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:11.916 05:28:15 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=106381 00:09:11.916 05:28:15 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.916 05:28:15 -- event/cpu_locks.sh@81 -- # waitforlisten 106381 /var/tmp/spdk.sock 00:09:11.916 05:28:15 -- common/autotest_common.sh@819 -- # '[' -z 106381 ']' 00:09:11.916 05:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.916 05:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.916 05:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.916 05:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.916 05:28:15 -- common/autotest_common.sh@10 -- # set +x 00:09:11.916 [2024-10-07 05:28:15.826998] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:11.916 [2024-10-07 05:28:15.827194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106381 ] 00:09:12.175 [2024-10-07 05:28:15.986079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.433 [2024-10-07 05:28:16.169014] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.433 [2024-10-07 05:28:16.169253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.810 05:28:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.810 05:28:17 -- common/autotest_common.sh@852 -- # return 0 00:09:13.810 05:28:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=106438 00:09:13.810 05:28:17 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:13.810 05:28:17 -- event/cpu_locks.sh@85 -- # waitforlisten 106438 /var/tmp/spdk2.sock 00:09:13.810 05:28:17 -- common/autotest_common.sh@819 -- # '[' -z 106438 ']' 00:09:13.810 05:28:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.810 05:28:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.810 05:28:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.810 05:28:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.810 05:28:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.810 [2024-10-07 05:28:17.539334] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:13.810 [2024-10-07 05:28:17.540074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106438 ] 00:09:13.810 [2024-10-07 05:28:17.698509] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
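Both ways of opting out of the core lock appear in this stretch of output: default_locks_via_rpc above toggles it at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs, while the second target here (pid 106438) is launched with the --disable-cpumask-locks flag and its own RPC socket, which is why app.c prints 'CPU core locks deactivated' and the two instances may overlap on core 0. A minimal illustration of the command-line variant, with the build path from this run (hugepage and memory sizing left at defaults):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                                  # first instance holds the core 0 lock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # second instance skips the lock, needs its own RPC socket

Without --disable-cpumask-locks the second launch would be expected to fail while the first target still holds the lock on core 0.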
00:09:13.810 [2024-10-07 05:28:17.698596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.376 [2024-10-07 05:28:18.075597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.376 [2024-10-07 05:28:18.075832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.279 05:28:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:16.279 05:28:19 -- common/autotest_common.sh@852 -- # return 0 00:09:16.279 05:28:19 -- event/cpu_locks.sh@87 -- # locks_exist 106381 00:09:16.279 05:28:19 -- event/cpu_locks.sh@22 -- # lslocks -p 106381 00:09:16.279 05:28:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.538 05:28:20 -- event/cpu_locks.sh@89 -- # killprocess 106381 00:09:16.538 05:28:20 -- common/autotest_common.sh@926 -- # '[' -z 106381 ']' 00:09:16.538 05:28:20 -- common/autotest_common.sh@930 -- # kill -0 106381 00:09:16.538 05:28:20 -- common/autotest_common.sh@931 -- # uname 00:09:16.538 05:28:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:16.538 05:28:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106381 00:09:16.538 killing process with pid 106381 00:09:16.538 05:28:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:16.538 05:28:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:16.538 05:28:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106381' 00:09:16.538 05:28:20 -- common/autotest_common.sh@945 -- # kill 106381 00:09:16.538 05:28:20 -- common/autotest_common.sh@950 -- # wait 106381 00:09:20.730 05:28:23 -- event/cpu_locks.sh@90 -- # killprocess 106438 00:09:20.730 05:28:23 -- common/autotest_common.sh@926 -- # '[' -z 106438 ']' 00:09:20.730 05:28:23 -- common/autotest_common.sh@930 -- # kill -0 106438 00:09:20.730 05:28:23 -- common/autotest_common.sh@931 -- # uname 00:09:20.730 05:28:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:20.730 05:28:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106438 00:09:20.730 killing process with pid 106438 00:09:20.730 05:28:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:20.730 05:28:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:20.730 05:28:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106438' 00:09:20.730 05:28:24 -- common/autotest_common.sh@945 -- # kill 106438 00:09:20.730 05:28:24 -- common/autotest_common.sh@950 -- # wait 106438 00:09:22.103 ************************************ 00:09:22.103 END TEST non_locking_app_on_locked_coremask 00:09:22.103 ************************************ 00:09:22.103 00:09:22.103 real 0m10.013s 00:09:22.103 user 0m10.835s 00:09:22.103 sys 0m1.299s 00:09:22.103 05:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.103 05:28:25 -- common/autotest_common.sh@10 -- # set +x 00:09:22.103 05:28:25 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:22.103 05:28:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:22.103 05:28:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.103 05:28:25 -- common/autotest_common.sh@10 -- # set +x 00:09:22.103 ************************************ 00:09:22.103 START TEST locking_app_on_unlocked_coremask 00:09:22.103 ************************************ 00:09:22.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.103 05:28:25 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:22.103 05:28:25 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=106902 00:09:22.103 05:28:25 -- event/cpu_locks.sh@99 -- # waitforlisten 106902 /var/tmp/spdk.sock 00:09:22.103 05:28:25 -- common/autotest_common.sh@819 -- # '[' -z 106902 ']' 00:09:22.103 05:28:25 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:22.103 05:28:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.103 05:28:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.103 05:28:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.103 05:28:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.103 05:28:25 -- common/autotest_common.sh@10 -- # set +x 00:09:22.103 [2024-10-07 05:28:25.877739] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:22.103 [2024-10-07 05:28:25.877911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106902 ] 00:09:22.103 [2024-10-07 05:28:26.028695] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:22.103 [2024-10-07 05:28:26.028767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.361 [2024-10-07 05:28:26.206219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.361 [2024-10-07 05:28:26.206462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:23.734 05:28:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.734 05:28:27 -- common/autotest_common.sh@852 -- # return 0 00:09:23.734 05:28:27 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=106958 00:09:23.734 05:28:27 -- event/cpu_locks.sh@103 -- # waitforlisten 106958 /var/tmp/spdk2.sock 00:09:23.734 05:28:27 -- common/autotest_common.sh@819 -- # '[' -z 106958 ']' 00:09:23.734 05:28:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.734 05:28:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:23.734 05:28:27 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:23.734 05:28:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.734 05:28:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:23.734 05:28:27 -- common/autotest_common.sh@10 -- # set +x 00:09:23.734 [2024-10-07 05:28:27.647504] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:23.734 [2024-10-07 05:28:27.648172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106958 ] 00:09:23.993 [2024-10-07 05:28:27.822160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.252 [2024-10-07 05:28:28.145181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:24.252 [2024-10-07 05:28:28.145369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.163 05:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:26.163 05:28:30 -- common/autotest_common.sh@852 -- # return 0 00:09:26.163 05:28:30 -- event/cpu_locks.sh@105 -- # locks_exist 106958 00:09:26.163 05:28:30 -- event/cpu_locks.sh@22 -- # lslocks -p 106958 00:09:26.163 05:28:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.730 05:28:30 -- event/cpu_locks.sh@107 -- # killprocess 106902 00:09:26.730 05:28:30 -- common/autotest_common.sh@926 -- # '[' -z 106902 ']' 00:09:26.730 05:28:30 -- common/autotest_common.sh@930 -- # kill -0 106902 00:09:26.730 05:28:30 -- common/autotest_common.sh@931 -- # uname 00:09:26.730 05:28:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:26.730 05:28:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106902 00:09:26.730 05:28:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:26.730 killing process with pid 106902 00:09:26.730 05:28:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:26.730 05:28:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106902' 00:09:26.730 05:28:30 -- common/autotest_common.sh@945 -- # kill 106902 00:09:26.730 05:28:30 -- common/autotest_common.sh@950 -- # wait 106902 00:09:30.017 05:28:33 -- event/cpu_locks.sh@108 -- # killprocess 106958 00:09:30.017 05:28:33 -- common/autotest_common.sh@926 -- # '[' -z 106958 ']' 00:09:30.017 05:28:33 -- common/autotest_common.sh@930 -- # kill -0 106958 00:09:30.017 05:28:33 -- common/autotest_common.sh@931 -- # uname 00:09:30.017 05:28:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:30.017 05:28:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106958 00:09:30.017 05:28:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.017 05:28:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.017 killing process with pid 106958 00:09:30.017 05:28:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106958' 00:09:30.017 05:28:33 -- common/autotest_common.sh@945 -- # kill 106958 00:09:30.017 05:28:33 -- common/autotest_common.sh@950 -- # wait 106958 00:09:31.922 00:09:31.922 real 0m9.893s 00:09:31.922 user 0m10.795s 00:09:31.922 sys 0m1.257s 00:09:31.922 05:28:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.922 05:28:35 -- common/autotest_common.sh@10 -- # set +x 00:09:31.922 ************************************ 00:09:31.922 END TEST locking_app_on_unlocked_coremask 00:09:31.922 ************************************ 00:09:31.922 05:28:35 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:31.922 05:28:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:31.922 05:28:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.922 05:28:35 -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.922 ************************************ 00:09:31.922 START TEST locking_app_on_locked_coremask 00:09:31.922 ************************************ 00:09:31.922 05:28:35 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:31.922 05:28:35 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107346 00:09:31.922 05:28:35 -- event/cpu_locks.sh@116 -- # waitforlisten 107346 /var/tmp/spdk.sock 00:09:31.922 05:28:35 -- common/autotest_common.sh@819 -- # '[' -z 107346 ']' 00:09:31.922 05:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.922 05:28:35 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:31.922 05:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:31.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.922 05:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.922 05:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:31.922 05:28:35 -- common/autotest_common.sh@10 -- # set +x 00:09:31.922 [2024-10-07 05:28:35.853172] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:31.922 [2024-10-07 05:28:35.854375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107346 ] 00:09:32.181 [2024-10-07 05:28:36.023528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.440 [2024-10-07 05:28:36.189592] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:32.440 [2024-10-07 05:28:36.189797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.818 05:28:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.818 05:28:37 -- common/autotest_common.sh@852 -- # return 0 00:09:33.818 05:28:37 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107407 00:09:33.818 05:28:37 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107407 /var/tmp/spdk2.sock 00:09:33.818 05:28:37 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:33.818 05:28:37 -- common/autotest_common.sh@640 -- # local es=0 00:09:33.818 05:28:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107407 /var/tmp/spdk2.sock 00:09:33.818 05:28:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:33.818 05:28:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:33.818 05:28:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:33.818 05:28:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:33.818 05:28:37 -- common/autotest_common.sh@643 -- # waitforlisten 107407 /var/tmp/spdk2.sock 00:09:33.818 05:28:37 -- common/autotest_common.sh@819 -- # '[' -z 107407 ']' 00:09:33.818 05:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:33.818 05:28:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
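Most of the tests in this block run two spdk_tgt instances side by side, which only works because each gets its own JSON-RPC listen socket: the first uses the default /var/tmp/spdk.sock, the second is started with -r /var/tmp/spdk2.sock and addressed with rpc_cmd -s later in the trace. Outside the harness the socket selection looks roughly like this; rpc_get_methods is just an innocuous RPC used here to show the addressing:

  ./scripts/rpc.py rpc_get_methods                           # default socket, /var/tmp/spdk.sock
  ./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods    # the second instance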
00:09:33.818 05:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:33.818 05:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.818 05:28:37 -- common/autotest_common.sh@10 -- # set +x 00:09:33.818 [2024-10-07 05:28:37.529985] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:33.818 [2024-10-07 05:28:37.530173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107407 ] 00:09:33.818 [2024-10-07 05:28:37.689610] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107346 has claimed it. 00:09:33.818 [2024-10-07 05:28:37.689683] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:34.387 ERROR: process (pid: 107407) is no longer running 00:09:34.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107407) - No such process 00:09:34.387 05:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.387 05:28:38 -- common/autotest_common.sh@852 -- # return 1 00:09:34.387 05:28:38 -- common/autotest_common.sh@643 -- # es=1 00:09:34.387 05:28:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:34.387 05:28:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:34.387 05:28:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:34.387 05:28:38 -- event/cpu_locks.sh@122 -- # locks_exist 107346 00:09:34.387 05:28:38 -- event/cpu_locks.sh@22 -- # lslocks -p 107346 00:09:34.387 05:28:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:34.387 05:28:38 -- event/cpu_locks.sh@124 -- # killprocess 107346 00:09:34.387 05:28:38 -- common/autotest_common.sh@926 -- # '[' -z 107346 ']' 00:09:34.387 05:28:38 -- common/autotest_common.sh@930 -- # kill -0 107346 00:09:34.387 05:28:38 -- common/autotest_common.sh@931 -- # uname 00:09:34.644 05:28:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:34.644 05:28:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107346 00:09:34.644 05:28:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:34.644 05:28:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:34.644 killing process with pid 107346 00:09:34.644 05:28:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107346' 00:09:34.644 05:28:38 -- common/autotest_common.sh@945 -- # kill 107346 00:09:34.644 05:28:38 -- common/autotest_common.sh@950 -- # wait 107346 00:09:36.549 00:09:36.549 real 0m4.339s 00:09:36.549 user 0m4.688s 00:09:36.549 sys 0m0.744s 00:09:36.549 05:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.549 05:28:40 -- common/autotest_common.sh@10 -- # set +x 00:09:36.549 ************************************ 00:09:36.549 END TEST locking_app_on_locked_coremask 00:09:36.549 ************************************ 00:09:36.549 05:28:40 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:36.549 05:28:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:36.549 05:28:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:36.549 05:28:40 -- common/autotest_common.sh@10 -- # set +x 00:09:36.549 ************************************ 00:09:36.549 START TEST 
locking_overlapped_coremask 00:09:36.549 ************************************ 00:09:36.549 05:28:40 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:36.549 05:28:40 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=107543 00:09:36.549 05:28:40 -- event/cpu_locks.sh@133 -- # waitforlisten 107543 /var/tmp/spdk.sock 00:09:36.549 05:28:40 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:36.549 05:28:40 -- common/autotest_common.sh@819 -- # '[' -z 107543 ']' 00:09:36.549 05:28:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.549 05:28:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.549 05:28:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.549 05:28:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.549 05:28:40 -- common/autotest_common.sh@10 -- # set +x 00:09:36.549 [2024-10-07 05:28:40.247373] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:36.549 [2024-10-07 05:28:40.248137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107543 ] 00:09:36.549 [2024-10-07 05:28:40.425799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:36.808 [2024-10-07 05:28:40.609205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.808 [2024-10-07 05:28:40.609545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.808 [2024-10-07 05:28:40.609645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.808 [2024-10-07 05:28:40.609643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.190 05:28:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.190 05:28:41 -- common/autotest_common.sh@852 -- # return 0 00:09:38.190 05:28:41 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=107691 00:09:38.190 05:28:41 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:38.190 05:28:41 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 107691 /var/tmp/spdk2.sock 00:09:38.190 05:28:41 -- common/autotest_common.sh@640 -- # local es=0 00:09:38.190 05:28:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107691 /var/tmp/spdk2.sock 00:09:38.190 05:28:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:38.190 05:28:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.190 05:28:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:38.190 05:28:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.190 05:28:41 -- common/autotest_common.sh@643 -- # waitforlisten 107691 /var/tmp/spdk2.sock 00:09:38.190 05:28:41 -- common/autotest_common.sh@819 -- # '[' -z 107691 ']' 00:09:38.190 05:28:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:38.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:38.190 05:28:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.190 05:28:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:38.190 05:28:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.190 05:28:41 -- common/autotest_common.sh@10 -- # set +x 00:09:38.190 [2024-10-07 05:28:42.052825] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:38.190 [2024-10-07 05:28:42.053031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107691 ] 00:09:38.449 [2024-10-07 05:28:42.236290] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107543 has claimed it. 00:09:38.449 [2024-10-07 05:28:42.236394] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:39.017 ERROR: process (pid: 107691) is no longer running 00:09:39.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107691) - No such process 00:09:39.017 05:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.017 05:28:42 -- common/autotest_common.sh@852 -- # return 1 00:09:39.017 05:28:42 -- common/autotest_common.sh@643 -- # es=1 00:09:39.017 05:28:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:39.017 05:28:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:39.017 05:28:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:39.017 05:28:42 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:39.017 05:28:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:39.017 05:28:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:39.017 05:28:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:39.017 05:28:42 -- event/cpu_locks.sh@141 -- # killprocess 107543 00:09:39.017 05:28:42 -- common/autotest_common.sh@926 -- # '[' -z 107543 ']' 00:09:39.017 05:28:42 -- common/autotest_common.sh@930 -- # kill -0 107543 00:09:39.017 05:28:42 -- common/autotest_common.sh@931 -- # uname 00:09:39.017 05:28:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:39.017 05:28:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107543 00:09:39.017 05:28:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:39.017 killing process with pid 107543 00:09:39.017 05:28:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:39.017 05:28:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107543' 00:09:39.017 05:28:42 -- common/autotest_common.sh@945 -- # kill 107543 00:09:39.017 05:28:42 -- common/autotest_common.sh@950 -- # wait 107543 00:09:40.921 00:09:40.921 real 0m4.430s 00:09:40.921 user 0m12.228s 00:09:40.921 sys 0m0.622s 00:09:40.921 05:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.921 ************************************ 00:09:40.921 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:40.921 END TEST locking_overlapped_coremask 00:09:40.921 ************************************ 
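The failure just traced follows directly from the core masks: -m 0x7 is binary 111, so the first target claims cores 0-2 (hence the three reactor threads above and, per the closing check, lock files spdk_cpu_lock_000 through _002), while -m 0x1c is binary 11100, cores 2-4. Core 2 sits in both masks, so claim_cpu_cores in the second instance fails and spdk_app_start exits with 'Unable to acquire lock on assigned core mask - exiting.', which is why the harness wraps waitforlisten for the second pid in NOT. A rough reproduction outside the harness, where the second command returns non-zero:

  ./build/bin/spdk_tgt -m 0x7 &                          # cores 0,1,2
  sleep 1                                                # the test waits on the RPC socket instead
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # cores 2,3,4; cannot lock core 2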
00:09:40.921 05:28:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:40.922 05:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.922 05:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.922 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:40.922 ************************************ 00:09:40.922 START TEST locking_overlapped_coremask_via_rpc 00:09:40.922 ************************************ 00:09:40.922 05:28:44 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:40.922 05:28:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=107788 00:09:40.922 05:28:44 -- event/cpu_locks.sh@149 -- # waitforlisten 107788 /var/tmp/spdk.sock 00:09:40.922 05:28:44 -- common/autotest_common.sh@819 -- # '[' -z 107788 ']' 00:09:40.922 05:28:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.922 05:28:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.922 05:28:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.922 05:28:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.922 05:28:44 -- common/autotest_common.sh@10 -- # set +x 00:09:40.922 05:28:44 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:40.922 [2024-10-07 05:28:44.722042] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:40.922 [2024-10-07 05:28:44.722434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107788 ] 00:09:41.180 [2024-10-07 05:28:44.899000] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:41.180 [2024-10-07 05:28:44.899080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.180 [2024-10-07 05:28:45.065453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.180 [2024-10-07 05:28:45.065768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.180 [2024-10-07 05:28:45.065913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.180 [2024-10-07 05:28:45.065925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.562 05:28:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:42.562 05:28:46 -- common/autotest_common.sh@852 -- # return 0 00:09:42.562 05:28:46 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=107883 00:09:42.562 05:28:46 -- event/cpu_locks.sh@153 -- # waitforlisten 107883 /var/tmp/spdk2.sock 00:09:42.562 05:28:46 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:42.562 05:28:46 -- common/autotest_common.sh@819 -- # '[' -z 107883 ']' 00:09:42.562 05:28:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.562 05:28:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:42.562 05:28:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.562 05:28:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.562 05:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:42.562 [2024-10-07 05:28:46.452142] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:42.562 [2024-10-07 05:28:46.452354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107883 ] 00:09:42.820 [2024-10-07 05:28:46.637198] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:42.820 [2024-10-07 05:28:46.637285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.078 [2024-10-07 05:28:46.999425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:43.078 [2024-10-07 05:28:46.999786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.078 [2024-10-07 05:28:47.014670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.078 [2024-10-07 05:28:47.014673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:44.981 05:28:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:44.981 05:28:48 -- common/autotest_common.sh@852 -- # return 0 00:09:44.981 05:28:48 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:44.981 05:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.981 05:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:44.981 05:28:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.981 05:28:48 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:44.981 05:28:48 -- common/autotest_common.sh@640 -- # local es=0 00:09:44.981 05:28:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:44.981 05:28:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:44.981 05:28:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:44.981 05:28:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:44.981 05:28:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:44.981 05:28:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:44.981 05:28:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.981 05:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:44.981 [2024-10-07 05:28:48.786746] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107788 has claimed it. 
00:09:44.981 request: 00:09:44.981 { 00:09:44.981 "method": "framework_enable_cpumask_locks", 00:09:44.981 "req_id": 1 00:09:44.981 } 00:09:44.981 Got JSON-RPC error response 00:09:44.981 response: 00:09:44.981 { 00:09:44.981 "code": -32603, 00:09:44.981 "message": "Failed to claim CPU core: 2" 00:09:44.981 } 00:09:44.981 05:28:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:44.981 05:28:48 -- common/autotest_common.sh@643 -- # es=1 00:09:44.981 05:28:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:44.981 05:28:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:44.981 05:28:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:44.981 05:28:48 -- event/cpu_locks.sh@158 -- # waitforlisten 107788 /var/tmp/spdk.sock 00:09:44.981 05:28:48 -- common/autotest_common.sh@819 -- # '[' -z 107788 ']' 00:09:44.981 05:28:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.981 05:28:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:44.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.981 05:28:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.981 05:28:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:44.981 05:28:48 -- common/autotest_common.sh@10 -- # set +x 00:09:45.240 05:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.240 05:28:49 -- common/autotest_common.sh@852 -- # return 0 00:09:45.240 05:28:49 -- event/cpu_locks.sh@159 -- # waitforlisten 107883 /var/tmp/spdk2.sock 00:09:45.240 05:28:49 -- common/autotest_common.sh@819 -- # '[' -z 107883 ']' 00:09:45.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.240 05:28:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.240 05:28:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.240 05:28:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
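In this variant both targets start with --disable-cpumask-locks and the locks are taken afterwards over JSON-RPC; the request and error above (code -32603, 'Failed to claim CPU core: 2') show the second target being refused because the first already claimed core 2 through the same call. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the direct equivalent is approximately:

  ./scripts/rpc.py framework_enable_cpumask_locks                           # first target, default socket: succeeds
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target: Failed to claim CPU core: 2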
00:09:45.240 05:28:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.240 05:28:49 -- common/autotest_common.sh@10 -- # set +x 00:09:45.499 05:28:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.499 05:28:49 -- common/autotest_common.sh@852 -- # return 0 00:09:45.499 05:28:49 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:45.499 05:28:49 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:45.499 05:28:49 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:45.499 05:28:49 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:45.499 00:09:45.499 real 0m4.701s 00:09:45.499 user 0m1.844s 00:09:45.499 sys 0m0.282s 00:09:45.499 ************************************ 00:09:45.499 END TEST locking_overlapped_coremask_via_rpc 00:09:45.499 05:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.499 05:28:49 -- common/autotest_common.sh@10 -- # set +x 00:09:45.499 ************************************ 00:09:45.499 05:28:49 -- event/cpu_locks.sh@174 -- # cleanup 00:09:45.499 05:28:49 -- event/cpu_locks.sh@15 -- # [[ -z 107788 ]] 00:09:45.499 05:28:49 -- event/cpu_locks.sh@15 -- # killprocess 107788 00:09:45.499 05:28:49 -- common/autotest_common.sh@926 -- # '[' -z 107788 ']' 00:09:45.499 05:28:49 -- common/autotest_common.sh@930 -- # kill -0 107788 00:09:45.499 05:28:49 -- common/autotest_common.sh@931 -- # uname 00:09:45.499 05:28:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:45.499 05:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107788 00:09:45.499 05:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:45.499 killing process with pid 107788 00:09:45.499 05:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:45.499 05:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107788' 00:09:45.499 05:28:49 -- common/autotest_common.sh@945 -- # kill 107788 00:09:45.499 05:28:49 -- common/autotest_common.sh@950 -- # wait 107788 00:09:48.029 05:28:51 -- event/cpu_locks.sh@16 -- # [[ -z 107883 ]] 00:09:48.029 05:28:51 -- event/cpu_locks.sh@16 -- # killprocess 107883 00:09:48.029 05:28:51 -- common/autotest_common.sh@926 -- # '[' -z 107883 ']' 00:09:48.029 05:28:51 -- common/autotest_common.sh@930 -- # kill -0 107883 00:09:48.029 05:28:51 -- common/autotest_common.sh@931 -- # uname 00:09:48.029 05:28:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:48.029 05:28:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107883 00:09:48.029 05:28:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:48.029 killing process with pid 107883 00:09:48.029 05:28:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:48.029 05:28:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107883' 00:09:48.029 05:28:51 -- common/autotest_common.sh@945 -- # kill 107883 00:09:48.029 05:28:51 -- common/autotest_common.sh@950 -- # wait 107883 00:09:49.404 05:28:53 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.404 05:28:53 -- event/cpu_locks.sh@1 -- # cleanup 00:09:49.404 05:28:53 -- event/cpu_locks.sh@15 -- # [[ -z 107788 ]] 00:09:49.404 05:28:53 -- event/cpu_locks.sh@15 -- # killprocess 107788 00:09:49.404 
05:28:53 -- common/autotest_common.sh@926 -- # '[' -z 107788 ']' 00:09:49.404 05:28:53 -- common/autotest_common.sh@930 -- # kill -0 107788 00:09:49.404 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107788) - No such process 00:09:49.404 Process with pid 107788 is not found 00:09:49.404 05:28:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107788 is not found' 00:09:49.404 05:28:53 -- event/cpu_locks.sh@16 -- # [[ -z 107883 ]] 00:09:49.404 05:28:53 -- event/cpu_locks.sh@16 -- # killprocess 107883 00:09:49.404 05:28:53 -- common/autotest_common.sh@926 -- # '[' -z 107883 ']' 00:09:49.404 05:28:53 -- common/autotest_common.sh@930 -- # kill -0 107883 00:09:49.404 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107883) - No such process 00:09:49.404 Process with pid 107883 is not found 00:09:49.404 05:28:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107883 is not found' 00:09:49.404 05:28:53 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.404 ************************************ 00:09:49.404 END TEST cpu_locks 00:09:49.404 ************************************ 00:09:49.404 00:09:49.404 real 0m45.306s 00:09:49.404 user 1m20.551s 00:09:49.404 sys 0m6.526s 00:09:49.404 05:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.404 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 ************************************ 00:09:49.663 END TEST event 00:09:49.663 ************************************ 00:09:49.663 00:09:49.663 real 1m16.389s 00:09:49.663 user 2m19.697s 00:09:49.663 sys 0m10.383s 00:09:49.663 05:28:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.663 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 05:28:53 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:49.663 05:28:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.663 05:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.663 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 ************************************ 00:09:49.663 START TEST thread 00:09:49.663 ************************************ 00:09:49.663 05:28:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:49.663 * Looking for test storage... 00:09:49.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:49.663 05:28:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:49.663 05:28:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:49.663 05:28:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.663 05:28:53 -- common/autotest_common.sh@10 -- # set +x 00:09:49.663 ************************************ 00:09:49.663 START TEST thread_poller_perf 00:09:49.663 ************************************ 00:09:49.663 05:28:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:49.663 [2024-10-07 05:28:53.571575] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:49.663 [2024-10-07 05:28:53.571961] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108283 ] 00:09:49.921 [2024-10-07 05:28:53.739650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.180 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:50.180 [2024-10-07 05:28:53.936071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.575 ====================================== 00:09:51.575 busy:2214532106 (cyc) 00:09:51.575 total_run_count: 374000 00:09:51.575 tsc_hz: 2200000000 (cyc) 00:09:51.575 ====================================== 00:09:51.575 poller_cost: 5921 (cyc), 2691 (nsec) 00:09:51.575 00:09:51.575 real 0m1.729s 00:09:51.575 user 0m1.504s 00:09:51.575 sys 0m0.124s 00:09:51.575 05:28:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.575 05:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:51.575 ************************************ 00:09:51.575 END TEST thread_poller_perf 00:09:51.575 ************************************ 00:09:51.575 05:28:55 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:51.575 05:28:55 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:51.575 05:28:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.575 05:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:51.575 ************************************ 00:09:51.575 START TEST thread_poller_perf 00:09:51.575 ************************************ 00:09:51.575 05:28:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:51.575 [2024-10-07 05:28:55.344816] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:51.576 [2024-10-07 05:28:55.345166] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108371 ] 00:09:51.576 [2024-10-07 05:28:55.512584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.834 [2024-10-07 05:28:55.750220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.834 Running 1000 pollers for 1 seconds with 0 microseconds period. 
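As the figures above confirm, poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC rate: 2214532106 cyc / 374000 runs is about 5921 cyc per poller execution, and 5921 cyc / 2.2 cyc per ns (tsc_hz 2200000000) is about 2691 ns, matching the printed 5921 (cyc), 2691 (nsec). The 0-microsecond-period run that follows reports its result the same way.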
00:09:53.209 ====================================== 00:09:53.209 busy:2204834646 (cyc) 00:09:53.209 total_run_count: 4646000 00:09:53.209 tsc_hz: 2200000000 (cyc) 00:09:53.209 ====================================== 00:09:53.209 poller_cost: 474 (cyc), 215 (nsec) 00:09:53.209 ************************************ 00:09:53.209 END TEST thread_poller_perf 00:09:53.209 ************************************ 00:09:53.209 00:09:53.209 real 0m1.789s 00:09:53.209 user 0m1.568s 00:09:53.209 sys 0m0.120s 00:09:53.209 05:28:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.209 05:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:53.209 05:28:57 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:53.209 05:28:57 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:53.209 05:28:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:53.209 05:28:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:53.209 05:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:53.209 ************************************ 00:09:53.209 START TEST thread_spdk_lock 00:09:53.209 ************************************ 00:09:53.209 05:28:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:53.468 [2024-10-07 05:28:57.188948] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:53.468 [2024-10-07 05:28:57.189128] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108513 ] 00:09:53.468 [2024-10-07 05:28:57.356879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.726 [2024-10-07 05:28:57.543999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.726 [2024-10-07 05:28:57.544006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.292 [2024-10-07 05:28:58.059832] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:54.292 [2024-10-07 05:28:58.059955] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:54.292 [2024-10-07 05:28:58.059997] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55fa79fc1ac0 00:09:54.292 [2024-10-07 05:28:58.067119] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:54.292 [2024-10-07 05:28:58.067222] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:54.293 [2024-10-07 05:28:58.067257] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:54.549 Starting test contend 00:09:54.549 Worker Delay Wait us Hold us Total us 00:09:54.549 0 3 132772 192485 325258 00:09:54.549 1 5 53333 296637 349970 00:09:54.549 PASS test contend 00:09:54.549 Starting test hold_by_poller 
00:09:54.549 PASS test hold_by_poller 00:09:54.549 Starting test hold_by_message 00:09:54.549 PASS test hold_by_message 00:09:54.549 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:54.549 100014 assertions passed 00:09:54.549 0 assertions failed 00:09:54.549 00:09:54.549 real 0m1.233s 00:09:54.549 user 0m1.560s 00:09:54.549 sys 0m0.096s 00:09:54.549 05:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.549 05:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 ************************************ 00:09:54.549 END TEST thread_spdk_lock 00:09:54.549 ************************************ 00:09:54.549 00:09:54.549 real 0m4.984s 00:09:54.549 user 0m4.753s 00:09:54.549 sys 0m0.434s 00:09:54.549 05:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.549 05:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 ************************************ 00:09:54.549 END TEST thread 00:09:54.549 ************************************ 00:09:54.549 05:28:58 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:54.549 05:28:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:54.549 05:28:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:54.549 05:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:54.549 ************************************ 00:09:54.549 START TEST accel 00:09:54.549 ************************************ 00:09:54.549 05:28:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:54.807 * Looking for test storage... 00:09:54.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:54.807 05:28:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:54.807 05:28:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:54.807 05:28:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:54.807 05:28:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=108636 00:09:54.807 05:28:58 -- accel/accel.sh@60 -- # waitforlisten 108636 00:09:54.807 05:28:58 -- common/autotest_common.sh@819 -- # '[' -z 108636 ']' 00:09:54.807 05:28:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.807 05:28:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:54.807 05:28:58 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:54.807 05:28:58 -- accel/accel.sh@58 -- # build_accel_config 00:09:54.807 05:28:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.807 05:28:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.807 05:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.807 05:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.807 05:28:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.807 05:28:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.807 05:28:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:54.807 05:28:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.807 05:28:58 -- accel/accel.sh@42 -- # jq -r . 00:09:54.807 05:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 [2024-10-07 05:28:58.611709] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:54.807 [2024-10-07 05:28:58.611905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108636 ] 00:09:54.807 [2024-10-07 05:28:58.780011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.065 [2024-10-07 05:28:58.964194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:55.065 [2024-10-07 05:28:58.964456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.442 05:29:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:56.442 05:29:00 -- common/autotest_common.sh@852 -- # return 0 00:09:56.442 05:29:00 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:56.442 05:29:00 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:56.442 05:29:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:56.442 05:29:00 -- common/autotest_common.sh@10 -- # set +x 00:09:56.442 05:29:00 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:56.442 05:29:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.442 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.442 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.442 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.443 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.443 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.443 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.443 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.443 05:29:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:56.443 05:29:00 -- accel/accel.sh@64 -- # IFS== 00:09:56.443 05:29:00 -- accel/accel.sh@64 -- # read -r opc module 00:09:56.443 05:29:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:56.443 05:29:00 -- accel/accel.sh@67 -- # killprocess 108636 00:09:56.443 05:29:00 -- common/autotest_common.sh@926 -- # '[' -z 108636 ']' 00:09:56.443 05:29:00 -- common/autotest_common.sh@930 -- # kill -0 108636 00:09:56.443 05:29:00 -- common/autotest_common.sh@931 -- # uname 00:09:56.443 05:29:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.443 05:29:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108636 00:09:56.443 05:29:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:56.443 05:29:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:56.443 05:29:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108636' 00:09:56.443 killing process with pid 108636 00:09:56.443 05:29:00 -- common/autotest_common.sh@945 -- # kill 108636 00:09:56.443 05:29:00 -- common/autotest_common.sh@950 -- # wait 108636 00:09:58.344 05:29:02 -- accel/accel.sh@68 -- # trap - ERR 00:09:58.344 05:29:02 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:58.344 05:29:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:58.344 05:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.344 05:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:58.344 05:29:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:58.344 05:29:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:58.344 05:29:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.344 05:29:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.344 05:29:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.344 05:29:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.344 05:29:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.344 05:29:02 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:09:58.344 05:29:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.344 05:29:02 -- accel/accel.sh@42 -- # jq -r . 00:09:58.344 05:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.344 05:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:58.344 05:29:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:58.344 05:29:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:58.344 05:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.344 05:29:02 -- common/autotest_common.sh@10 -- # set +x 00:09:58.345 ************************************ 00:09:58.345 START TEST accel_missing_filename 00:09:58.345 ************************************ 00:09:58.345 05:29:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:58.345 05:29:02 -- common/autotest_common.sh@640 -- # local es=0 00:09:58.345 05:29:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:58.345 05:29:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:58.345 05:29:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.345 05:29:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:58.345 05:29:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.345 05:29:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:58.345 05:29:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:58.345 05:29:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.345 05:29:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.345 05:29:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.345 05:29:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.345 05:29:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.345 05:29:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.345 05:29:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.345 05:29:02 -- accel/accel.sh@42 -- # jq -r . 00:09:58.603 [2024-10-07 05:29:02.342231] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:58.603 [2024-10-07 05:29:02.342412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108825 ] 00:09:58.603 [2024-10-07 05:29:02.511173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.862 [2024-10-07 05:29:02.673313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.121 [2024-10-07 05:29:02.839311] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.380 [2024-10-07 05:29:03.234066] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:59.638 A filename is required. 
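accel_missing_filename is a pure negative test: accel_perf is launched with -w compress but no -l input file, and per the usage text printed later in this section -l names the uncompressed input that compress/decompress workloads require, so the run below aborts with 'A filename is required.' Stripped of the harness config fd, the failing call and the follow-up case exercised by accel_compress_verify are roughly:

  ./build/examples/accel_perf -t 1 -w compress                        # no -l: 'A filename is required.'
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y   # -y verify is then rejected for compress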
00:09:59.638 ************************************ 00:09:59.638 END TEST accel_missing_filename 00:09:59.638 ************************************ 00:09:59.638 05:29:03 -- common/autotest_common.sh@643 -- # es=234 00:09:59.638 05:29:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:59.638 05:29:03 -- common/autotest_common.sh@652 -- # es=106 00:09:59.638 05:29:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:59.638 05:29:03 -- common/autotest_common.sh@660 -- # es=1 00:09:59.638 05:29:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:59.638 00:09:59.638 real 0m1.270s 00:09:59.638 user 0m1.041s 00:09:59.638 sys 0m0.170s 00:09:59.638 05:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.638 05:29:03 -- common/autotest_common.sh@10 -- # set +x 00:09:59.638 05:29:03 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.638 05:29:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:59.638 05:29:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.638 05:29:03 -- common/autotest_common.sh@10 -- # set +x 00:09:59.896 ************************************ 00:09:59.896 START TEST accel_compress_verify 00:09:59.896 ************************************ 00:09:59.896 05:29:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.896 05:29:03 -- common/autotest_common.sh@640 -- # local es=0 00:09:59.896 05:29:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.896 05:29:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:59.896 05:29:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.896 05:29:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:59.896 05:29:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.896 05:29:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.896 05:29:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:59.896 05:29:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.896 05:29:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.896 05:29:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.896 05:29:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.896 05:29:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.896 05:29:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.896 05:29:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.896 05:29:03 -- accel/accel.sh@42 -- # jq -r . 00:09:59.896 [2024-10-07 05:29:03.676473] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:09:59.896 [2024-10-07 05:29:03.677109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108880 ] 00:09:59.896 [2024-10-07 05:29:03.832433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.154 [2024-10-07 05:29:04.023405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.411 [2024-10-07 05:29:04.200402] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.669 [2024-10-07 05:29:04.609464] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:01.237 00:10:01.237 Compression does not support the verify option, aborting. 00:10:01.237 05:29:04 -- common/autotest_common.sh@643 -- # es=161 00:10:01.237 05:29:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:01.237 05:29:04 -- common/autotest_common.sh@652 -- # es=33 00:10:01.237 05:29:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:01.237 05:29:04 -- common/autotest_common.sh@660 -- # es=1 00:10:01.237 05:29:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:01.237 00:10:01.237 real 0m1.297s 00:10:01.237 user 0m1.089s 00:10:01.237 sys 0m0.157s 00:10:01.237 05:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.237 05:29:04 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 END TEST accel_compress_verify 00:10:01.237 ************************************ 00:10:01.237 05:29:04 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:01.237 05:29:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:01.237 05:29:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.237 05:29:04 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 START TEST accel_wrong_workload 00:10:01.237 ************************************ 00:10:01.237 05:29:04 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:01.237 05:29:04 -- common/autotest_common.sh@640 -- # local es=0 00:10:01.237 05:29:04 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:01.237 05:29:04 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:01.237 05:29:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.237 05:29:04 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:01.237 05:29:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.237 05:29:04 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:01.237 05:29:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:01.237 05:29:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.237 05:29:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.237 05:29:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.237 05:29:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.237 05:29:04 -- accel/accel.sh@42 -- # jq -r . 
00:10:01.237 Unsupported workload type: foobar 00:10:01.237 [2024-10-07 05:29:05.021526] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:01.237 accel_perf options: 00:10:01.237 [-h help message] 00:10:01.237 [-q queue depth per core] 00:10:01.237 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:01.237 [-T number of threads per core 00:10:01.237 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:01.237 [-t time in seconds] 00:10:01.237 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:01.237 [ dif_verify, , dif_generate, dif_generate_copy 00:10:01.237 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:01.237 [-l for compress/decompress workloads, name of uncompressed input file 00:10:01.237 [-S for crc32c workload, use this seed value (default 0) 00:10:01.237 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:01.237 [-f for fill workload, use this BYTE value (default 255) 00:10:01.237 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:01.237 [-y verify result if this switch is on] 00:10:01.237 [-a tasks to allocate per core (default: same value as -q)] 00:10:01.237 Can be used to spread operations across a wider range of memory. 00:10:01.237 05:29:05 -- common/autotest_common.sh@643 -- # es=1 00:10:01.237 05:29:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:01.237 05:29:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:01.237 05:29:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:01.237 00:10:01.237 real 0m0.063s 00:10:01.237 user 0m0.073s 00:10:01.237 sys 0m0.041s 00:10:01.237 05:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.237 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 END TEST accel_wrong_workload 00:10:01.237 ************************************ 00:10:01.237 05:29:05 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:01.237 05:29:05 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:01.237 05:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.237 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 START TEST accel_negative_buffers 00:10:01.237 ************************************ 00:10:01.237 05:29:05 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:01.237 05:29:05 -- common/autotest_common.sh@640 -- # local es=0 00:10:01.237 05:29:05 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:01.237 05:29:05 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:01.237 05:29:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.237 05:29:05 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:01.237 05:29:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:01.237 05:29:05 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:01.237 05:29:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:01.237 05:29:05 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:01.237 05:29:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.237 05:29:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.237 05:29:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.237 05:29:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.237 05:29:05 -- accel/accel.sh@42 -- # jq -r . 00:10:01.237 -x option must be non-negative. 00:10:01.237 [2024-10-07 05:29:05.137495] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:01.237 accel_perf options: 00:10:01.237 [-h help message] 00:10:01.237 [-q queue depth per core] 00:10:01.237 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:01.237 [-T number of threads per core 00:10:01.237 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:01.237 [-t time in seconds] 00:10:01.237 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:01.237 [ dif_verify, , dif_generate, dif_generate_copy 00:10:01.237 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:01.237 [-l for compress/decompress workloads, name of uncompressed input file 00:10:01.237 [-S for crc32c workload, use this seed value (default 0) 00:10:01.237 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:01.237 [-f for fill workload, use this BYTE value (default 255) 00:10:01.237 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:01.237 [-y verify result if this switch is on] 00:10:01.237 [-a tasks to allocate per core (default: same value as -q)] 00:10:01.237 Can be used to spread operations across a wider range of memory. 
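Note: the option summary above is printed because accel_perf rejects "-x -1"; per its own help text the xor workload needs at least two source buffers, so a negative -x aborts argument parsing before the app starts. A rough sketch of the contrast, reusing only the binary path and flags visible in this log (the ACCEL_PERF variable and the passing "-x 2" variant are illustrative assumptions, not taken from this run):

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$ACCEL_PERF" -t 1 -w xor -y -x -1 && exit 1   # rejected, as logged above
    "$ACCEL_PERF" -t 1 -w xor -y -x 2              # minimal valid xor invocation (assumed)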
00:10:01.237 05:29:05 -- common/autotest_common.sh@643 -- # es=1 00:10:01.237 05:29:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:01.237 05:29:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:01.237 05:29:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:01.237 00:10:01.237 real 0m0.069s 00:10:01.237 user 0m0.074s 00:10:01.237 sys 0m0.041s 00:10:01.237 05:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.237 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 END TEST accel_negative_buffers 00:10:01.237 ************************************ 00:10:01.237 05:29:05 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:01.237 05:29:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:01.237 05:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.237 05:29:05 -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 ************************************ 00:10:01.237 START TEST accel_crc32c 00:10:01.237 ************************************ 00:10:01.238 05:29:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:01.238 05:29:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:01.238 05:29:05 -- accel/accel.sh@17 -- # local accel_module 00:10:01.238 05:29:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:01.238 05:29:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.238 05:29:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:01.238 05:29:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.238 05:29:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.238 05:29:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.238 05:29:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.238 05:29:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.238 05:29:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.238 05:29:05 -- accel/accel.sh@42 -- # jq -r . 00:10:01.496 [2024-10-07 05:29:05.246024] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:01.496 [2024-10-07 05:29:05.246216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109051 ] 00:10:01.496 [2024-10-07 05:29:05.412755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.766 [2024-10-07 05:29:05.600337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.683 05:29:07 -- accel/accel.sh@18 -- # out=' 00:10:03.683 SPDK Configuration: 00:10:03.683 Core mask: 0x1 00:10:03.683 00:10:03.683 Accel Perf Configuration: 00:10:03.683 Workload Type: crc32c 00:10:03.683 CRC-32C seed: 32 00:10:03.683 Transfer size: 4096 bytes 00:10:03.683 Vector count 1 00:10:03.683 Module: software 00:10:03.683 Queue depth: 32 00:10:03.683 Allocate depth: 32 00:10:03.683 # threads/core: 1 00:10:03.683 Run time: 1 seconds 00:10:03.683 Verify: Yes 00:10:03.683 00:10:03.683 Running for 1 seconds... 
00:10:03.684 00:10:03.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:03.684 ------------------------------------------------------------------------------------ 00:10:03.684 0,0 511360/s 1997 MiB/s 0 0 00:10:03.684 ==================================================================================== 00:10:03.684 Total 511360/s 1997 MiB/s 0 0' 00:10:03.684 05:29:07 -- accel/accel.sh@20 -- # IFS=: 00:10:03.684 05:29:07 -- accel/accel.sh@20 -- # read -r var val 00:10:03.684 05:29:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:03.684 05:29:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:03.684 05:29:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.684 05:29:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.684 05:29:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.684 05:29:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.684 05:29:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.684 05:29:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.684 05:29:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.684 05:29:07 -- accel/accel.sh@42 -- # jq -r . 00:10:03.684 [2024-10-07 05:29:07.533901] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:03.684 [2024-10-07 05:29:07.534104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109141 ] 00:10:03.942 [2024-10-07 05:29:07.695383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.942 [2024-10-07 05:29:07.885291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=0x1 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=crc32c 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=32 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=software 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@23 -- # accel_module=software 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=32 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=32 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=1 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val=Yes 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:04.203 05:29:08 -- accel/accel.sh@21 -- # val= 00:10:04.203 05:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # IFS=: 00:10:04.203 05:29:08 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 
-- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@21 -- # val= 00:10:06.104 05:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # IFS=: 00:10:06.104 05:29:09 -- accel/accel.sh@20 -- # read -r var val 00:10:06.104 05:29:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:06.104 05:29:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:06.104 05:29:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:06.104 00:10:06.104 real 0m4.582s 00:10:06.104 user 0m4.086s 00:10:06.104 sys 0m0.322s 00:10:06.104 ************************************ 00:10:06.104 END TEST accel_crc32c 00:10:06.104 05:29:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.104 05:29:09 -- common/autotest_common.sh@10 -- # set +x 00:10:06.104 ************************************ 00:10:06.104 05:29:09 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:06.104 05:29:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:06.104 05:29:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:06.104 05:29:09 -- common/autotest_common.sh@10 -- # set +x 00:10:06.104 ************************************ 00:10:06.104 START TEST accel_crc32c_C2 00:10:06.104 ************************************ 00:10:06.104 05:29:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:06.104 05:29:09 -- accel/accel.sh@16 -- # local accel_opc 00:10:06.104 05:29:09 -- accel/accel.sh@17 -- # local accel_module 00:10:06.104 05:29:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:06.104 05:29:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:06.104 05:29:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.104 05:29:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.104 05:29:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.104 05:29:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.104 05:29:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.104 05:29:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.104 05:29:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.104 05:29:09 -- accel/accel.sh@42 -- # jq -r . 00:10:06.104 [2024-10-07 05:29:09.867864] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:06.104 [2024-10-07 05:29:09.868542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109245 ] 00:10:06.104 [2024-10-07 05:29:10.023938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.363 [2024-10-07 05:29:10.198126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.264 05:29:12 -- accel/accel.sh@18 -- # out=' 00:10:08.264 SPDK Configuration: 00:10:08.264 Core mask: 0x1 00:10:08.264 00:10:08.264 Accel Perf Configuration: 00:10:08.264 Workload Type: crc32c 00:10:08.264 CRC-32C seed: 0 00:10:08.264 Transfer size: 4096 bytes 00:10:08.264 Vector count 2 00:10:08.264 Module: software 00:10:08.264 Queue depth: 32 00:10:08.264 Allocate depth: 32 00:10:08.264 # threads/core: 1 00:10:08.264 Run time: 1 seconds 00:10:08.264 Verify: Yes 00:10:08.264 00:10:08.264 Running for 1 seconds... 
00:10:08.264 00:10:08.264 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:08.264 ------------------------------------------------------------------------------------ 00:10:08.264 0,0 394464/s 3081 MiB/s 0 0 00:10:08.264 ==================================================================================== 00:10:08.264 Total 394464/s 1540 MiB/s 0 0' 00:10:08.264 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.264 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.264 05:29:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:08.264 05:29:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:08.264 05:29:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.264 05:29:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.264 05:29:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.264 05:29:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.264 05:29:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.264 05:29:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.264 05:29:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.264 05:29:12 -- accel/accel.sh@42 -- # jq -r . 00:10:08.264 [2024-10-07 05:29:12.159949] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:08.264 [2024-10-07 05:29:12.160427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109342 ] 00:10:08.523 [2024-10-07 05:29:12.328107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.781 [2024-10-07 05:29:12.525382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.781 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=0x1 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=crc32c 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=0 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=software 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=32 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=32 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=1 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val=Yes 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:08.782 05:29:12 -- accel/accel.sh@21 -- # val= 00:10:08.782 05:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # IFS=: 00:10:08.782 05:29:12 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- 
accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@21 -- # val= 00:10:10.685 05:29:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # IFS=: 00:10:10.685 05:29:14 -- accel/accel.sh@20 -- # read -r var val 00:10:10.685 05:29:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:10.685 05:29:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:10.685 05:29:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.685 00:10:10.685 real 0m4.611s 00:10:10.685 user 0m4.093s 00:10:10.685 sys 0m0.333s 00:10:10.685 05:29:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.685 ************************************ 00:10:10.685 END TEST accel_crc32c_C2 00:10:10.685 ************************************ 00:10:10.685 05:29:14 -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 05:29:14 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:10.685 05:29:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:10.685 05:29:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:10.685 05:29:14 -- common/autotest_common.sh@10 -- # set +x 00:10:10.685 ************************************ 00:10:10.685 START TEST accel_copy 00:10:10.685 ************************************ 00:10:10.685 05:29:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:10.685 05:29:14 -- accel/accel.sh@16 -- # local accel_opc 00:10:10.685 05:29:14 -- accel/accel.sh@17 -- # local accel_module 00:10:10.685 05:29:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:10.685 05:29:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:10.685 05:29:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.685 05:29:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.685 05:29:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.685 05:29:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.685 05:29:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.685 05:29:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.685 05:29:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.685 05:29:14 -- accel/accel.sh@42 -- # jq -r . 00:10:10.685 [2024-10-07 05:29:14.535238] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:10.685 [2024-10-07 05:29:14.535600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109482 ] 00:10:10.944 [2024-10-07 05:29:14.703834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.944 [2024-10-07 05:29:14.878200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.845 05:29:16 -- accel/accel.sh@18 -- # out=' 00:10:12.845 SPDK Configuration: 00:10:12.845 Core mask: 0x1 00:10:12.845 00:10:12.845 Accel Perf Configuration: 00:10:12.845 Workload Type: copy 00:10:12.845 Transfer size: 4096 bytes 00:10:12.845 Vector count 1 00:10:12.845 Module: software 00:10:12.845 Queue depth: 32 00:10:12.845 Allocate depth: 32 00:10:12.845 # threads/core: 1 00:10:12.845 Run time: 1 seconds 00:10:12.845 Verify: Yes 00:10:12.845 00:10:12.845 Running for 1 seconds... 
00:10:12.845 00:10:12.845 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:12.845 ------------------------------------------------------------------------------------ 00:10:12.845 0,0 306688/s 1198 MiB/s 0 0 00:10:12.845 ==================================================================================== 00:10:12.845 Total 306688/s 1198 MiB/s 0 0' 00:10:12.845 05:29:16 -- accel/accel.sh@20 -- # IFS=: 00:10:12.845 05:29:16 -- accel/accel.sh@20 -- # read -r var val 00:10:12.845 05:29:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:12.845 05:29:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:12.845 05:29:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:12.845 05:29:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:12.845 05:29:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.845 05:29:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.845 05:29:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:12.845 05:29:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:12.845 05:29:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:12.845 05:29:16 -- accel/accel.sh@42 -- # jq -r . 00:10:12.845 [2024-10-07 05:29:16.802298] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:12.845 [2024-10-07 05:29:16.803060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109557 ] 00:10:13.103 [2024-10-07 05:29:16.955325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.362 [2024-10-07 05:29:17.147330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=0x1 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=copy 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- 
accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=software 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=32 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=32 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=1 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val=Yes 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:13.362 05:29:17 -- accel/accel.sh@21 -- # val= 00:10:13.362 05:29:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # IFS=: 00:10:13.362 05:29:17 -- accel/accel.sh@20 -- # read -r var val 00:10:15.263 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.263 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.263 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.263 05:29:19 -- accel/accel.sh@20 -- # read -r var val 00:10:15.263 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.263 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.263 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.263 05:29:19 -- accel/accel.sh@20 -- # read -r var val 00:10:15.263 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.263 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.263 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # read -r var val 00:10:15.264 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.264 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # read -r var val 00:10:15.264 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.264 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # read -r var val 00:10:15.264 05:29:19 -- accel/accel.sh@21 -- # val= 00:10:15.264 05:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.264 05:29:19 -- accel/accel.sh@20 -- # IFS=: 00:10:15.264 05:29:19 -- 
accel/accel.sh@20 -- # read -r var val 00:10:15.264 05:29:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:15.264 05:29:19 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:15.264 05:29:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:15.264 00:10:15.264 real 0m4.566s 00:10:15.264 user 0m4.069s 00:10:15.264 sys 0m0.313s 00:10:15.264 05:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.264 ************************************ 00:10:15.264 END TEST accel_copy 00:10:15.264 ************************************ 00:10:15.264 05:29:19 -- common/autotest_common.sh@10 -- # set +x 00:10:15.264 05:29:19 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.264 05:29:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:15.264 05:29:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:15.264 05:29:19 -- common/autotest_common.sh@10 -- # set +x 00:10:15.264 ************************************ 00:10:15.264 START TEST accel_fill 00:10:15.264 ************************************ 00:10:15.264 05:29:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.264 05:29:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:15.264 05:29:19 -- accel/accel.sh@17 -- # local accel_module 00:10:15.264 05:29:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.264 05:29:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:15.264 05:29:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.264 05:29:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.264 05:29:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.264 05:29:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.264 05:29:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.264 05:29:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.264 05:29:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.264 05:29:19 -- accel/accel.sh@42 -- # jq -r . 00:10:15.264 [2024-10-07 05:29:19.154845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:15.264 [2024-10-07 05:29:19.155049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109653 ] 00:10:15.524 [2024-10-07 05:29:19.323518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.524 [2024-10-07 05:29:19.501056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.053 05:29:21 -- accel/accel.sh@18 -- # out=' 00:10:18.053 SPDK Configuration: 00:10:18.053 Core mask: 0x1 00:10:18.053 00:10:18.053 Accel Perf Configuration: 00:10:18.053 Workload Type: fill 00:10:18.053 Fill pattern: 0x80 00:10:18.053 Transfer size: 4096 bytes 00:10:18.053 Vector count 1 00:10:18.053 Module: software 00:10:18.053 Queue depth: 64 00:10:18.053 Allocate depth: 64 00:10:18.053 # threads/core: 1 00:10:18.053 Run time: 1 seconds 00:10:18.053 Verify: Yes 00:10:18.053 00:10:18.053 Running for 1 seconds... 
00:10:18.053 00:10:18.053 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.053 ------------------------------------------------------------------------------------ 00:10:18.053 0,0 452096/s 1766 MiB/s 0 0 00:10:18.053 ==================================================================================== 00:10:18.053 Total 452096/s 1766 MiB/s 0 0' 00:10:18.053 05:29:21 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:21 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:18.054 05:29:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.054 05:29:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:18.054 05:29:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.054 05:29:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.054 05:29:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.054 05:29:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.054 05:29:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.054 05:29:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.054 05:29:21 -- accel/accel.sh@42 -- # jq -r . 00:10:18.054 [2024-10-07 05:29:21.464016] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:18.054 [2024-10-07 05:29:21.464202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109796 ] 00:10:18.054 [2024-10-07 05:29:21.628372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.054 [2024-10-07 05:29:21.832850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=0x1 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=fill 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=0x80 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 
00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=software 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@23 -- # accel_module=software 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=64 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=64 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=1 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val=Yes 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:18.054 05:29:22 -- accel/accel.sh@21 -- # val= 00:10:18.054 05:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # IFS=: 00:10:18.054 05:29:22 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 
00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@21 -- # val= 00:10:19.956 05:29:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # IFS=: 00:10:19.956 05:29:23 -- accel/accel.sh@20 -- # read -r var val 00:10:19.956 05:29:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:19.956 05:29:23 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:19.956 05:29:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.956 00:10:19.956 real 0m4.636s 00:10:19.956 user 0m4.145s 00:10:19.956 sys 0m0.313s 00:10:19.956 05:29:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.956 ************************************ 00:10:19.956 END TEST accel_fill 00:10:19.956 ************************************ 00:10:19.956 05:29:23 -- common/autotest_common.sh@10 -- # set +x 00:10:19.956 05:29:23 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:19.956 05:29:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:19.956 05:29:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:19.956 05:29:23 -- common/autotest_common.sh@10 -- # set +x 00:10:19.956 ************************************ 00:10:19.956 START TEST accel_copy_crc32c 00:10:19.956 ************************************ 00:10:19.956 05:29:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:19.956 05:29:23 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.956 05:29:23 -- accel/accel.sh@17 -- # local accel_module 00:10:19.956 05:29:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:19.956 05:29:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:19.956 05:29:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.956 05:29:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.956 05:29:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.956 05:29:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.956 05:29:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.956 05:29:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.956 05:29:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.956 05:29:23 -- accel/accel.sh@42 -- # jq -r . 00:10:19.956 [2024-10-07 05:29:23.842832] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:19.956 [2024-10-07 05:29:23.843014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109893 ] 00:10:20.215 [2024-10-07 05:29:24.011902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.215 [2024-10-07 05:29:24.185395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.117 05:29:26 -- accel/accel.sh@18 -- # out=' 00:10:22.117 SPDK Configuration: 00:10:22.117 Core mask: 0x1 00:10:22.117 00:10:22.117 Accel Perf Configuration: 00:10:22.117 Workload Type: copy_crc32c 00:10:22.117 CRC-32C seed: 0 00:10:22.117 Vector size: 4096 bytes 00:10:22.117 Transfer size: 4096 bytes 00:10:22.117 Vector count 1 00:10:22.117 Module: software 00:10:22.117 Queue depth: 32 00:10:22.117 Allocate depth: 32 00:10:22.117 # threads/core: 1 00:10:22.117 Run time: 1 seconds 00:10:22.117 Verify: Yes 00:10:22.117 00:10:22.117 Running for 1 seconds... 
00:10:22.117 00:10:22.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:22.117 ------------------------------------------------------------------------------------ 00:10:22.117 0,0 251904/s 984 MiB/s 0 0 00:10:22.117 ==================================================================================== 00:10:22.117 Total 251904/s 984 MiB/s 0 0' 00:10:22.117 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.117 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.117 05:29:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:22.117 05:29:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:22.117 05:29:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:22.117 05:29:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:22.117 05:29:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:22.117 05:29:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:22.117 05:29:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:22.117 05:29:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:22.117 05:29:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:22.117 05:29:26 -- accel/accel.sh@42 -- # jq -r . 00:10:22.375 [2024-10-07 05:29:26.126824] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:22.375 [2024-10-07 05:29:26.126996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109995 ] 00:10:22.375 [2024-10-07 05:29:26.295047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.633 [2024-10-07 05:29:26.477460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=0x1 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=0 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 
05:29:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=software 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=32 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=32 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=1 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val=Yes 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:22.891 05:29:26 -- accel/accel.sh@21 -- # val= 00:10:22.891 05:29:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # IFS=: 00:10:22.891 05:29:26 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 
00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@21 -- # val= 00:10:24.791 05:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # IFS=: 00:10:24.791 05:29:28 -- accel/accel.sh@20 -- # read -r var val 00:10:24.791 05:29:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:24.791 05:29:28 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:24.791 05:29:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.791 00:10:24.791 real 0m4.651s 00:10:24.791 user 0m4.130s 00:10:24.791 sys 0m0.320s 00:10:24.791 05:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.791 05:29:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.791 ************************************ 00:10:24.791 END TEST accel_copy_crc32c 00:10:24.791 ************************************ 00:10:24.791 05:29:28 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:24.791 05:29:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:24.791 05:29:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.791 05:29:28 -- common/autotest_common.sh@10 -- # set +x 00:10:24.791 ************************************ 00:10:24.791 START TEST accel_copy_crc32c_C2 00:10:24.791 ************************************ 00:10:24.791 05:29:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:24.791 05:29:28 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.791 05:29:28 -- accel/accel.sh@17 -- # local accel_module 00:10:24.791 05:29:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:24.791 05:29:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:24.791 05:29:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.791 05:29:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.791 05:29:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.791 05:29:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.791 05:29:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.791 05:29:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.791 05:29:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.791 05:29:28 -- accel/accel.sh@42 -- # jq -r . 00:10:24.791 [2024-10-07 05:29:28.535798] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
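Editor's note on the copy_crc32c runs above: both launch the accel_perf example with the flags echoed in the trace (-t 1 -w copy_crc32c -y) and fall back to the software module. A minimal sketch of repeating the run by hand and re-deriving the reported throughput; SPDK_DIR is an assumed checkout location, and the awk line only re-checks the arithmetic behind the 984 MiB/s figure (251904 transfers/s at 4096 bytes each).

  # Assumed checkout location; adjust to the local SPDK tree.
  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
  # Same workload flags as the log: 1-second run, copy_crc32c, verify enabled.
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y
  # Bandwidth sanity check: transfers/s * transfer size, expressed in MiB/s.
  awk 'BEGIN { printf "%.0f MiB/s\n", 251904 * 4096 / (1024 * 1024) }'   # ~984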
00:10:24.791 [2024-10-07 05:29:28.535982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110093 ] 00:10:24.791 [2024-10-07 05:29:28.691918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.050 [2024-10-07 05:29:28.862449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.952 05:29:30 -- accel/accel.sh@18 -- # out=' 00:10:26.952 SPDK Configuration: 00:10:26.952 Core mask: 0x1 00:10:26.952 00:10:26.952 Accel Perf Configuration: 00:10:26.952 Workload Type: copy_crc32c 00:10:26.952 CRC-32C seed: 0 00:10:26.952 Vector size: 4096 bytes 00:10:26.952 Transfer size: 8192 bytes 00:10:26.952 Vector count 2 00:10:26.952 Module: software 00:10:26.952 Queue depth: 32 00:10:26.952 Allocate depth: 32 00:10:26.952 # threads/core: 1 00:10:26.952 Run time: 1 seconds 00:10:26.952 Verify: Yes 00:10:26.952 00:10:26.952 Running for 1 seconds... 00:10:26.952 00:10:26.952 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:26.952 ------------------------------------------------------------------------------------ 00:10:26.952 0,0 173632/s 1356 MiB/s 0 0 00:10:26.952 ==================================================================================== 00:10:26.952 Total 173632/s 678 MiB/s 0 0' 00:10:26.952 05:29:30 -- accel/accel.sh@20 -- # IFS=: 00:10:26.952 05:29:30 -- accel/accel.sh@20 -- # read -r var val 00:10:26.952 05:29:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:26.952 05:29:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:26.952 05:29:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.952 05:29:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.952 05:29:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.952 05:29:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.952 05:29:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.952 05:29:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.952 05:29:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.952 05:29:30 -- accel/accel.sh@42 -- # jq -r . 00:10:26.952 [2024-10-07 05:29:30.799135] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:26.952 [2024-10-07 05:29:30.799350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110221 ] 00:10:27.211 [2024-10-07 05:29:30.967371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.211 [2024-10-07 05:29:31.152443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val=0x1 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:27.469 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.469 05:29:31 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.469 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.469 05:29:31 -- accel/accel.sh@21 -- # val=0 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val=software 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@23 -- # accel_module=software 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val=32 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val=32 
00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val=1 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val=Yes 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:27.470 05:29:31 -- accel/accel.sh@21 -- # val= 00:10:27.470 05:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # IFS=: 00:10:27.470 05:29:31 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@21 -- # val= 00:10:29.387 05:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # IFS=: 00:10:29.387 05:29:33 -- accel/accel.sh@20 -- # read -r var val 00:10:29.387 05:29:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.387 05:29:33 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:29.387 05:29:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.387 00:10:29.387 real 0m4.588s 00:10:29.387 user 0m4.027s 00:10:29.387 sys 0m0.382s 00:10:29.387 05:29:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.387 05:29:33 -- common/autotest_common.sh@10 -- # set +x 00:10:29.387 ************************************ 00:10:29.387 END TEST accel_copy_crc32c_C2 00:10:29.387 ************************************ 00:10:29.387 05:29:33 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:29.387 05:29:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
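Editor's note on the -C 2 variant that just finished: it keeps the 4096-byte vector size but copies two vectors per operation, which is why its configuration block reports Transfer size: 8192 bytes. A short sketch re-deriving that relationship and the per-core throughput row from the numbers printed above:

  # Vector size times vector count gives the 8192-byte transfer size shown above.
  vector_size=4096
  vector_count=2
  echo "transfer size: $((vector_size * vector_count)) bytes"
  # 173632 transfers/s at 8192 bytes per transfer is roughly 1356 MiB/s,
  # which matches the per-core (0,0) row of the results table.
  awk 'BEGIN { printf "%.0f MiB/s\n", 173632 * 8192 / (1024 * 1024) }'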
00:10:29.387 05:29:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.387 05:29:33 -- common/autotest_common.sh@10 -- # set +x 00:10:29.387 ************************************ 00:10:29.387 START TEST accel_dualcast 00:10:29.387 ************************************ 00:10:29.387 05:29:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:29.387 05:29:33 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.387 05:29:33 -- accel/accel.sh@17 -- # local accel_module 00:10:29.387 05:29:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:29.387 05:29:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.387 05:29:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:29.387 05:29:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.387 05:29:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.387 05:29:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.387 05:29:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.387 05:29:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.387 05:29:33 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.387 05:29:33 -- accel/accel.sh@42 -- # jq -r . 00:10:29.387 [2024-10-07 05:29:33.176798] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:29.387 [2024-10-07 05:29:33.176992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110308 ] 00:10:29.387 [2024-10-07 05:29:33.343357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.646 [2024-10-07 05:29:33.514550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.548 05:29:35 -- accel/accel.sh@18 -- # out=' 00:10:31.548 SPDK Configuration: 00:10:31.548 Core mask: 0x1 00:10:31.548 00:10:31.548 Accel Perf Configuration: 00:10:31.548 Workload Type: dualcast 00:10:31.548 Transfer size: 4096 bytes 00:10:31.548 Vector count 1 00:10:31.548 Module: software 00:10:31.548 Queue depth: 32 00:10:31.548 Allocate depth: 32 00:10:31.548 # threads/core: 1 00:10:31.548 Run time: 1 seconds 00:10:31.548 Verify: Yes 00:10:31.548 00:10:31.548 Running for 1 seconds... 00:10:31.548 00:10:31.548 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.548 ------------------------------------------------------------------------------------ 00:10:31.548 0,0 316736/s 1237 MiB/s 0 0 00:10:31.548 ==================================================================================== 00:10:31.548 Total 316736/s 1237 MiB/s 0 0' 00:10:31.548 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:31.548 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:31.548 05:29:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:31.548 05:29:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:31.548 05:29:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.548 05:29:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.548 05:29:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.548 05:29:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.548 05:29:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.548 05:29:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.548 05:29:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.548 05:29:35 -- accel/accel.sh@42 -- # jq -r . 
00:10:31.548 [2024-10-07 05:29:35.441708] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:31.548 [2024-10-07 05:29:35.441895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110394 ] 00:10:31.807 [2024-10-07 05:29:35.609624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.066 [2024-10-07 05:29:35.810562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=0x1 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=dualcast 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=software 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@23 -- # accel_module=software 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=32 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=32 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:35 -- accel/accel.sh@21 -- # val=1 00:10:32.066 05:29:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:35 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 
05:29:36 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:32.066 05:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:36 -- accel/accel.sh@21 -- # val=Yes 00:10:32.066 05:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:36 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # read -r var val 00:10:32.066 05:29:36 -- accel/accel.sh@21 -- # val= 00:10:32.066 05:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # IFS=: 00:10:32.066 05:29:36 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@21 -- # val= 00:10:33.971 05:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # IFS=: 00:10:33.971 05:29:37 -- accel/accel.sh@20 -- # read -r var val 00:10:33.971 05:29:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:33.971 05:29:37 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:33.971 05:29:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.971 00:10:33.971 real 0m4.621s 00:10:33.971 user 0m4.080s 00:10:33.971 sys 0m0.363s 00:10:33.971 05:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.971 ************************************ 00:10:33.971 END TEST accel_dualcast 00:10:33.971 ************************************ 00:10:33.971 05:29:37 -- common/autotest_common.sh@10 -- # set +x 00:10:33.971 05:29:37 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:33.971 05:29:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:33.971 05:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.971 05:29:37 -- common/autotest_common.sh@10 -- # set +x 00:10:33.971 ************************************ 00:10:33.971 START TEST accel_compare 00:10:33.971 ************************************ 00:10:33.971 05:29:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:33.971 
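Editor's note on the dualcast test above and on the invocation pattern shared by all of these tests: each accel_perf call reads its accel configuration through -c /dev/fd/62, and build_accel_config appears to join the accel_json_cfg entries with commas and normalise the result with jq before handing it over on a file descriptor (the trace shows accel_json_cfg=(), local IFS=, and jq -r .). A hedged illustration of that descriptor-passing technique, with cat standing in for accel_perf and a made-up fragment standing in for a real module entry; in the runs above the array stays empty, so nothing module-specific is actually passed.

  # Hypothetical config fragment; in the log accel_json_cfg=() has no entries.
  accel_json_cfg=('"dummy_module": {}')
  # Join entries with commas, normalise with jq, and expose the result on a
  # /dev/fd path via process substitution; cat stands in for the consumer.
  ( IFS=,; cat <(echo "{${accel_json_cfg[*]}}" | jq -r .) )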
05:29:37 -- accel/accel.sh@16 -- # local accel_opc 00:10:33.971 05:29:37 -- accel/accel.sh@17 -- # local accel_module 00:10:33.971 05:29:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:33.971 05:29:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:33.971 05:29:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.971 05:29:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.971 05:29:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.971 05:29:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.971 05:29:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.971 05:29:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.971 05:29:37 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.971 05:29:37 -- accel/accel.sh@42 -- # jq -r . 00:10:33.971 [2024-10-07 05:29:37.846692] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:33.971 [2024-10-07 05:29:37.846877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110571 ] 00:10:34.230 [2024-10-07 05:29:38.012053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.230 [2024-10-07 05:29:38.199720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.131 05:29:40 -- accel/accel.sh@18 -- # out=' 00:10:36.131 SPDK Configuration: 00:10:36.131 Core mask: 0x1 00:10:36.131 00:10:36.131 Accel Perf Configuration: 00:10:36.131 Workload Type: compare 00:10:36.131 Transfer size: 4096 bytes 00:10:36.131 Vector count 1 00:10:36.131 Module: software 00:10:36.131 Queue depth: 32 00:10:36.131 Allocate depth: 32 00:10:36.131 # threads/core: 1 00:10:36.131 Run time: 1 seconds 00:10:36.131 Verify: Yes 00:10:36.131 00:10:36.131 Running for 1 seconds... 00:10:36.131 00:10:36.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:36.131 ------------------------------------------------------------------------------------ 00:10:36.131 0,0 463168/s 1809 MiB/s 0 0 00:10:36.131 ==================================================================================== 00:10:36.131 Total 463168/s 1809 MiB/s 0 0' 00:10:36.131 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.131 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.131 05:29:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:36.131 05:29:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:36.131 05:29:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.131 05:29:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.131 05:29:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.131 05:29:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.131 05:29:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.131 05:29:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.131 05:29:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.131 05:29:40 -- accel/accel.sh@42 -- # jq -r . 00:10:36.391 [2024-10-07 05:29:40.131062] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:36.391 [2024-10-07 05:29:40.131250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110642 ] 00:10:36.391 [2024-10-07 05:29:40.299502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.649 [2024-10-07 05:29:40.497652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=0x1 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=compare 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=software 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@23 -- # accel_module=software 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=32 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=32 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=1 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val=Yes 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:36.909 05:29:40 -- accel/accel.sh@21 -- # val= 00:10:36.909 05:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # IFS=: 00:10:36.909 05:29:40 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 05:29:42 -- accel/accel.sh@21 -- # val= 00:10:38.813 05:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # IFS=: 00:10:38.813 05:29:42 -- accel/accel.sh@20 -- # read -r var val 00:10:38.813 ************************************ 00:10:38.813 END TEST accel_compare 00:10:38.813 ************************************ 00:10:38.813 05:29:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.813 05:29:42 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:38.813 05:29:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.813 00:10:38.813 real 0m4.591s 00:10:38.813 user 0m4.094s 00:10:38.813 sys 0m0.320s 00:10:38.813 05:29:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.813 05:29:42 -- common/autotest_common.sh@10 -- # set +x 00:10:38.813 05:29:42 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:38.813 05:29:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:38.813 05:29:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:38.813 05:29:42 -- common/autotest_common.sh@10 -- # set +x 00:10:38.813 ************************************ 00:10:38.813 START TEST accel_xor 00:10:38.813 ************************************ 00:10:38.813 05:29:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:38.813 05:29:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.813 05:29:42 -- accel/accel.sh@17 -- # local accel_module 00:10:38.813 
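Editor's note on the compare run above: it follows the same pattern as the earlier workloads, with only the -w argument (plus the occasional workload-specific flag such as -C or -x) changing. A hedged sketch of sweeping the same software-module workloads by hand with the flags taken from this log; SPDK_DIR is again an assumption.

  # One-second verified run per workload, mirroring the invocations in this log.
  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
  for w in copy_crc32c dualcast compare xor dif_verify; do
    echo "=== $w ==="
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$w" -y
  done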
05:29:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:38.813 05:29:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:38.813 05:29:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.813 05:29:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.813 05:29:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.813 05:29:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.813 05:29:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.813 05:29:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.813 05:29:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.813 05:29:42 -- accel/accel.sh@42 -- # jq -r . 00:10:38.813 [2024-10-07 05:29:42.495448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:38.813 [2024-10-07 05:29:42.495783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110746 ] 00:10:38.813 [2024-10-07 05:29:42.660957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.072 [2024-10-07 05:29:42.839528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.986 05:29:44 -- accel/accel.sh@18 -- # out=' 00:10:40.986 SPDK Configuration: 00:10:40.986 Core mask: 0x1 00:10:40.986 00:10:40.986 Accel Perf Configuration: 00:10:40.986 Workload Type: xor 00:10:40.986 Source buffers: 2 00:10:40.986 Transfer size: 4096 bytes 00:10:40.986 Vector count 1 00:10:40.986 Module: software 00:10:40.986 Queue depth: 32 00:10:40.986 Allocate depth: 32 00:10:40.986 # threads/core: 1 00:10:40.986 Run time: 1 seconds 00:10:40.986 Verify: Yes 00:10:40.986 00:10:40.986 Running for 1 seconds... 00:10:40.986 00:10:40.986 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.986 ------------------------------------------------------------------------------------ 00:10:40.986 0,0 240480/s 939 MiB/s 0 0 00:10:40.986 ==================================================================================== 00:10:40.986 Total 240480/s 939 MiB/s 0 0' 00:10:40.986 05:29:44 -- accel/accel.sh@20 -- # IFS=: 00:10:40.986 05:29:44 -- accel/accel.sh@20 -- # read -r var val 00:10:40.986 05:29:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:40.986 05:29:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:40.986 05:29:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.986 05:29:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.986 05:29:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.986 05:29:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.986 05:29:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.986 05:29:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.986 05:29:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.986 05:29:44 -- accel/accel.sh@42 -- # jq -r . 00:10:40.986 [2024-10-07 05:29:44.769615] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:40.986 [2024-10-07 05:29:44.769896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110881 ] 00:10:40.986 [2024-10-07 05:29:44.921929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.257 [2024-10-07 05:29:45.135983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=0x1 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=xor 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=2 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=software 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=32 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=32 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=1 00:10:41.516 05:29:45 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val=Yes 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:41.516 05:29:45 -- accel/accel.sh@21 -- # val= 00:10:41.516 05:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # IFS=: 00:10:41.516 05:29:45 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 05:29:47 -- accel/accel.sh@21 -- # val= 00:10:43.418 05:29:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # IFS=: 00:10:43.418 05:29:47 -- accel/accel.sh@20 -- # read -r var val 00:10:43.418 ************************************ 00:10:43.418 END TEST accel_xor 00:10:43.418 ************************************ 00:10:43.418 05:29:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:43.418 05:29:47 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:43.418 05:29:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.418 00:10:43.418 real 0m4.615s 00:10:43.418 user 0m4.095s 00:10:43.418 sys 0m0.329s 00:10:43.418 05:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.418 05:29:47 -- common/autotest_common.sh@10 -- # set +x 00:10:43.418 05:29:47 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:43.418 05:29:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:43.418 05:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:43.418 05:29:47 -- common/autotest_common.sh@10 -- # set +x 00:10:43.418 ************************************ 00:10:43.418 START TEST accel_xor 00:10:43.418 ************************************ 00:10:43.418 
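Editor's note: with several workloads now finished, the per-test summaries are easy to collect from a raw accel_perf capture. A hypothetical post-processing one-liner follows; accel.log is an assumed file holding plain accel_perf output (without the harness prefixes seen in this transcript), not something the suite produces itself.

  # Remember the last "Workload Type:" seen, then print it next to each Total row.
  awk '/Workload Type:/ { w = $NF } /^Total/ { print w, $3, $4 }' accel.log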
05:29:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:43.418 05:29:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:43.418 05:29:47 -- accel/accel.sh@17 -- # local accel_module 00:10:43.418 05:29:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:43.418 05:29:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:43.418 05:29:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.418 05:29:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.418 05:29:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.418 05:29:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.418 05:29:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.418 05:29:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.418 05:29:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.418 05:29:47 -- accel/accel.sh@42 -- # jq -r . 00:10:43.418 [2024-10-07 05:29:47.148958] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:43.418 [2024-10-07 05:29:47.149124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110995 ] 00:10:43.418 [2024-10-07 05:29:47.303742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.677 [2024-10-07 05:29:47.477991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.577 05:29:49 -- accel/accel.sh@18 -- # out=' 00:10:45.577 SPDK Configuration: 00:10:45.577 Core mask: 0x1 00:10:45.577 00:10:45.577 Accel Perf Configuration: 00:10:45.577 Workload Type: xor 00:10:45.577 Source buffers: 3 00:10:45.577 Transfer size: 4096 bytes 00:10:45.577 Vector count 1 00:10:45.577 Module: software 00:10:45.577 Queue depth: 32 00:10:45.577 Allocate depth: 32 00:10:45.577 # threads/core: 1 00:10:45.577 Run time: 1 seconds 00:10:45.577 Verify: Yes 00:10:45.577 00:10:45.577 Running for 1 seconds... 00:10:45.577 00:10:45.577 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.577 ------------------------------------------------------------------------------------ 00:10:45.577 0,0 235808/s 921 MiB/s 0 0 00:10:45.577 ==================================================================================== 00:10:45.577 Total 235808/s 921 MiB/s 0 0' 00:10:45.577 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:45.577 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:45.577 05:29:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:45.577 05:29:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:45.577 05:29:49 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.577 05:29:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.577 05:29:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.577 05:29:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.577 05:29:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.577 05:29:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.577 05:29:49 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.577 05:29:49 -- accel/accel.sh@42 -- # jq -r . 00:10:45.577 [2024-10-07 05:29:49.427280] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:45.578 [2024-10-07 05:29:49.427498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111097 ] 00:10:45.836 [2024-10-07 05:29:49.595131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.836 [2024-10-07 05:29:49.771218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=0x1 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=xor 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=3 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=software 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=32 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=32 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=1 00:10:46.094 05:29:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val=Yes 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:46.094 05:29:49 -- accel/accel.sh@21 -- # val= 00:10:46.094 05:29:49 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # IFS=: 00:10:46.094 05:29:49 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@21 -- # val= 00:10:47.997 05:29:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # IFS=: 00:10:47.997 05:29:51 -- accel/accel.sh@20 -- # read -r var val 00:10:47.997 05:29:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.997 05:29:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:47.997 05:29:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.997 00:10:47.997 real 0m4.614s 00:10:47.997 user 0m4.103s 00:10:47.997 sys 0m0.336s 00:10:47.997 05:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.997 05:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:47.997 ************************************ 00:10:47.997 END TEST accel_xor 00:10:47.997 ************************************ 00:10:47.997 05:29:51 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:47.997 05:29:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:47.997 05:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.997 05:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:47.997 ************************************ 00:10:47.997 START TEST accel_dif_verify 00:10:47.997 ************************************ 
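Editor's note on the two xor tests above: the only difference between them is the source-buffer count, the first using the default two sources and the second passing -x 3, and throughput drops slightly (939 MiB/s versus 921 MiB/s). A short sketch of rerunning the pair side by side and quantifying that drop from the figures reported above; SPDK_DIR is an assumed path.

  SPDK_DIR=${SPDK_DIR:-$HOME/spdk}   # assumed checkout location
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y        # 2 source buffers
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3   # 3 source buffers
  # Relative throughput of the 3-source run, using the totals reported above.
  awk 'BEGIN { printf "3-source xor: %.1f%% of 2-source throughput\n", 100 * 921 / 939 }'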
00:10:47.997 05:29:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:47.997 05:29:51 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.997 05:29:51 -- accel/accel.sh@17 -- # local accel_module 00:10:47.997 05:29:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:47.997 05:29:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:47.997 05:29:51 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.997 05:29:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.997 05:29:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.997 05:29:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.997 05:29:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.997 05:29:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.997 05:29:51 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.997 05:29:51 -- accel/accel.sh@42 -- # jq -r . 00:10:47.997 [2024-10-07 05:29:51.821788] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:47.997 [2024-10-07 05:29:51.822880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111210 ] 00:10:48.255 [2024-10-07 05:29:51.994680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.255 [2024-10-07 05:29:52.176896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.156 05:29:54 -- accel/accel.sh@18 -- # out=' 00:10:50.156 SPDK Configuration: 00:10:50.156 Core mask: 0x1 00:10:50.156 00:10:50.156 Accel Perf Configuration: 00:10:50.156 Workload Type: dif_verify 00:10:50.156 Vector size: 4096 bytes 00:10:50.156 Transfer size: 4096 bytes 00:10:50.156 Block size: 512 bytes 00:10:50.156 Metadata size: 8 bytes 00:10:50.156 Vector count 1 00:10:50.156 Module: software 00:10:50.156 Queue depth: 32 00:10:50.156 Allocate depth: 32 00:10:50.156 # threads/core: 1 00:10:50.156 Run time: 1 seconds 00:10:50.156 Verify: No 00:10:50.156 00:10:50.156 Running for 1 seconds... 00:10:50.156 00:10:50.156 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:50.156 ------------------------------------------------------------------------------------ 00:10:50.156 0,0 114816/s 455 MiB/s 0 0 00:10:50.156 ==================================================================================== 00:10:50.156 Total 114816/s 448 MiB/s 0 0' 00:10:50.156 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.156 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.156 05:29:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:50.156 05:29:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:50.156 05:29:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.156 05:29:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.156 05:29:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.156 05:29:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.156 05:29:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.156 05:29:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.156 05:29:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.156 05:29:54 -- accel/accel.sh@42 -- # jq -r . 00:10:50.156 [2024-10-07 05:29:54.120316] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:50.156 [2024-10-07 05:29:54.120507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111327 ] 00:10:50.415 [2024-10-07 05:29:54.280129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.673 [2024-10-07 05:29:54.470811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=0x1 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=dif_verify 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=software 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- 
accel/accel.sh@21 -- # val=32 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=32 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=1 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val=No 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:50.932 05:29:54 -- accel/accel.sh@21 -- # val= 00:10:50.932 05:29:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # IFS=: 00:10:50.932 05:29:54 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@21 -- # val= 00:10:52.835 05:29:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # IFS=: 00:10:52.835 05:29:56 -- accel/accel.sh@20 -- # read -r var val 00:10:52.835 05:29:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.835 05:29:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:52.835 05:29:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.835 00:10:52.835 real 0m4.627s 00:10:52.835 user 0m4.069s 00:10:52.835 sys 0m0.366s 00:10:52.835 05:29:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.835 ************************************ 00:10:52.835 END TEST accel_dif_verify 00:10:52.835 
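For reference, the accel_dif_verify pass above reduces to a single accel_perf invocation, and its numbers are self-consistent: at 4096 bytes per transfer, the reported 114816 transfers/s works out to roughly 448 MiB/s, matching the Total row. A minimal way to rerun it by hand is sketched below; only the binary path, -t 1 and -w dif_verify appear verbatim in the trace, while -q 32 and -o 4096 are assumptions inferred from the reported "Queue depth: 32" and "Transfer size: 4096 bytes", and dropping the -c /dev/fd/62 config redirection that accel.sh injects should simply leave the default software module selected.
  # sketch only: rerun the software dif_verify workload for 1 second
  # -q 32 / -o 4096 are assumed flags mirroring the reported queue depth and transfer size
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify -q 32 -o 4096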
************************************ 00:10:52.835 05:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:52.835 05:29:56 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:52.835 05:29:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:52.835 05:29:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.835 05:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:52.835 ************************************ 00:10:52.835 START TEST accel_dif_generate 00:10:52.835 ************************************ 00:10:52.835 05:29:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:10:52.835 05:29:56 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.835 05:29:56 -- accel/accel.sh@17 -- # local accel_module 00:10:52.835 05:29:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:52.835 05:29:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:52.835 05:29:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.835 05:29:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.835 05:29:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.835 05:29:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.835 05:29:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.835 05:29:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.835 05:29:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.835 05:29:56 -- accel/accel.sh@42 -- # jq -r . 00:10:52.835 [2024-10-07 05:29:56.497795] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:52.835 [2024-10-07 05:29:56.497983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111416 ] 00:10:52.835 [2024-10-07 05:29:56.663659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.094 [2024-10-07 05:29:56.850892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.030 05:29:58 -- accel/accel.sh@18 -- # out=' 00:10:55.030 SPDK Configuration: 00:10:55.030 Core mask: 0x1 00:10:55.030 00:10:55.030 Accel Perf Configuration: 00:10:55.030 Workload Type: dif_generate 00:10:55.030 Vector size: 4096 bytes 00:10:55.030 Transfer size: 4096 bytes 00:10:55.030 Block size: 512 bytes 00:10:55.030 Metadata size: 8 bytes 00:10:55.030 Vector count 1 00:10:55.030 Module: software 00:10:55.030 Queue depth: 32 00:10:55.030 Allocate depth: 32 00:10:55.030 # threads/core: 1 00:10:55.030 Run time: 1 seconds 00:10:55.030 Verify: No 00:10:55.030 00:10:55.030 Running for 1 seconds... 
00:10:55.030 00:10:55.030 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:55.030 ------------------------------------------------------------------------------------ 00:10:55.030 0,0 140928/s 559 MiB/s 0 0 00:10:55.030 ==================================================================================== 00:10:55.030 Total 140928/s 550 MiB/s 0 0' 00:10:55.030 05:29:58 -- accel/accel.sh@20 -- # IFS=: 00:10:55.030 05:29:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:55.030 05:29:58 -- accel/accel.sh@20 -- # read -r var val 00:10:55.030 05:29:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:55.030 05:29:58 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.030 05:29:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.030 05:29:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.030 05:29:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.030 05:29:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.030 05:29:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.030 05:29:58 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.030 05:29:58 -- accel/accel.sh@42 -- # jq -r . 00:10:55.030 [2024-10-07 05:29:58.794025] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:55.030 [2024-10-07 05:29:58.794727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111509 ] 00:10:55.030 [2024-10-07 05:29:58.960259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.288 [2024-10-07 05:29:59.174288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.546 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.546 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.546 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=0x1 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=dif_generate 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 
00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=software 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=32 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=32 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=1 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val=No 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:55.547 05:29:59 -- accel/accel.sh@21 -- # val= 00:10:55.547 05:29:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # IFS=: 00:10:55.547 05:29:59 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- 
accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@21 -- # val= 00:10:57.448 05:30:01 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # IFS=: 00:10:57.448 05:30:01 -- accel/accel.sh@20 -- # read -r var val 00:10:57.448 05:30:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:57.448 05:30:01 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:57.448 05:30:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.448 00:10:57.448 real 0m4.639s 00:10:57.448 user 0m4.151s 00:10:57.448 sys 0m0.324s 00:10:57.448 05:30:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.448 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.448 ************************************ 00:10:57.448 END TEST accel_dif_generate 00:10:57.448 ************************************ 00:10:57.448 05:30:01 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:57.448 05:30:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:57.448 05:30:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:57.448 05:30:01 -- common/autotest_common.sh@10 -- # set +x 00:10:57.448 ************************************ 00:10:57.448 START TEST accel_dif_generate_copy 00:10:57.448 ************************************ 00:10:57.448 05:30:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:10:57.448 05:30:01 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.448 05:30:01 -- accel/accel.sh@17 -- # local accel_module 00:10:57.448 05:30:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:57.449 05:30:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:57.449 05:30:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.449 05:30:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.449 05:30:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.449 05:30:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.449 05:30:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.449 05:30:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.449 05:30:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.449 05:30:01 -- accel/accel.sh@42 -- # jq -r . 00:10:57.449 [2024-10-07 05:30:01.188610] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:57.449 [2024-10-07 05:30:01.189494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111686 ] 00:10:57.449 [2024-10-07 05:30:01.355764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.707 [2024-10-07 05:30:01.529920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.605 05:30:03 -- accel/accel.sh@18 -- # out=' 00:10:59.605 SPDK Configuration: 00:10:59.605 Core mask: 0x1 00:10:59.605 00:10:59.605 Accel Perf Configuration: 00:10:59.605 Workload Type: dif_generate_copy 00:10:59.605 Vector size: 4096 bytes 00:10:59.605 Transfer size: 4096 bytes 00:10:59.605 Vector count 1 00:10:59.605 Module: software 00:10:59.605 Queue depth: 32 00:10:59.605 Allocate depth: 32 00:10:59.605 # threads/core: 1 00:10:59.605 Run time: 1 seconds 00:10:59.605 Verify: No 00:10:59.605 00:10:59.605 Running for 1 seconds... 00:10:59.605 00:10:59.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:59.605 ------------------------------------------------------------------------------------ 00:10:59.605 0,0 107200/s 425 MiB/s 0 0 00:10:59.605 ==================================================================================== 00:10:59.605 Total 107200/s 418 MiB/s 0 0' 00:10:59.605 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:10:59.605 05:30:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:59.605 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:10:59.605 05:30:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:59.605 05:30:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:59.605 05:30:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:59.605 05:30:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:59.605 05:30:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:59.605 05:30:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:59.605 05:30:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:59.605 05:30:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:59.605 05:30:03 -- accel/accel.sh@42 -- # jq -r . 00:10:59.605 [2024-10-07 05:30:03.460092] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:59.605 [2024-10-07 05:30:03.460256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111751 ] 00:10:59.863 [2024-10-07 05:30:03.613900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.863 [2024-10-07 05:30:03.802397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=0x1 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=software 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=32 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=32 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 
-- # val=1 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val=No 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:00.122 05:30:03 -- accel/accel.sh@21 -- # val= 00:11:00.122 05:30:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # IFS=: 00:11:00.122 05:30:03 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@21 -- # val= 00:11:02.025 05:30:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # IFS=: 00:11:02.025 05:30:05 -- accel/accel.sh@20 -- # read -r var val 00:11:02.025 05:30:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:02.025 05:30:05 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:02.025 ************************************ 00:11:02.025 END TEST accel_dif_generate_copy 00:11:02.025 ************************************ 00:11:02.025 05:30:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.025 00:11:02.025 real 0m4.568s 00:11:02.025 user 0m4.062s 00:11:02.025 sys 0m0.347s 00:11:02.025 05:30:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.025 05:30:05 -- common/autotest_common.sh@10 -- # set +x 00:11:02.025 05:30:05 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:02.025 05:30:05 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.025 05:30:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:02.025 05:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:02.025 05:30:05 -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.025 ************************************ 00:11:02.025 START TEST accel_comp 00:11:02.025 ************************************ 00:11:02.025 05:30:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.025 05:30:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:02.025 05:30:05 -- accel/accel.sh@17 -- # local accel_module 00:11:02.025 05:30:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.025 05:30:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:02.025 05:30:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.025 05:30:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.025 05:30:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.025 05:30:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.025 05:30:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.025 05:30:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.025 05:30:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.025 05:30:05 -- accel/accel.sh@42 -- # jq -r . 00:11:02.025 [2024-10-07 05:30:05.806930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:02.025 [2024-10-07 05:30:05.807115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111859 ] 00:11:02.025 [2024-10-07 05:30:05.974775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.284 [2024-10-07 05:30:06.132488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.186 05:30:08 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:04.186 00:11:04.186 SPDK Configuration: 00:11:04.186 Core mask: 0x1 00:11:04.186 00:11:04.186 Accel Perf Configuration: 00:11:04.186 Workload Type: compress 00:11:04.186 Transfer size: 4096 bytes 00:11:04.186 Vector count 1 00:11:04.186 Module: software 00:11:04.186 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.186 Queue depth: 32 00:11:04.186 Allocate depth: 32 00:11:04.186 # threads/core: 1 00:11:04.186 Run time: 1 seconds 00:11:04.186 Verify: No 00:11:04.186 00:11:04.186 Running for 1 seconds... 
00:11:04.186 00:11:04.186 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:04.186 ------------------------------------------------------------------------------------ 00:11:04.186 0,0 58624/s 244 MiB/s 0 0 00:11:04.186 ==================================================================================== 00:11:04.186 Total 58624/s 229 MiB/s 0 0' 00:11:04.186 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.186 05:30:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.186 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.186 05:30:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.186 05:30:08 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.186 05:30:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.186 05:30:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.186 05:30:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.186 05:30:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.186 05:30:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.186 05:30:08 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.186 05:30:08 -- accel/accel.sh@42 -- # jq -r . 00:11:04.186 [2024-10-07 05:30:08.114923] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:04.186 [2024-10-07 05:30:08.115062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111927 ] 00:11:04.445 [2024-10-07 05:30:08.272430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.703 [2024-10-07 05:30:08.470799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val=0x1 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.961 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.961 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.961 05:30:08 -- accel/accel.sh@21 -- # val=compress 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 
00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=software 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@23 -- # accel_module=software 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=32 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=32 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=1 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val=No 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:04.962 05:30:08 -- accel/accel.sh@21 -- # val= 00:11:04.962 05:30:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # IFS=: 00:11:04.962 05:30:08 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 
00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@21 -- # val= 00:11:06.865 05:30:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # IFS=: 00:11:06.865 05:30:10 -- accel/accel.sh@20 -- # read -r var val 00:11:06.865 05:30:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:06.865 05:30:10 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:06.865 05:30:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:06.865 00:11:06.865 real 0m4.655s 00:11:06.865 user 0m4.156s 00:11:06.865 sys 0m0.329s 00:11:06.865 ************************************ 00:11:06.865 END TEST accel_comp 00:11:06.865 ************************************ 00:11:06.865 05:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.866 05:30:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.866 05:30:10 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.866 05:30:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:06.866 05:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.866 05:30:10 -- common/autotest_common.sh@10 -- # set +x 00:11:06.866 ************************************ 00:11:06.866 START TEST accel_decomp 00:11:06.866 ************************************ 00:11:06.866 05:30:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.866 05:30:10 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.866 05:30:10 -- accel/accel.sh@17 -- # local accel_module 00:11:06.866 05:30:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.866 05:30:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.866 05:30:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.866 05:30:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.866 05:30:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.866 05:30:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.866 05:30:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.866 05:30:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.866 05:30:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.866 05:30:10 -- accel/accel.sh@42 -- # jq -r . 00:11:06.866 [2024-10-07 05:30:10.504712] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:06.866 [2024-10-07 05:30:10.505488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111980 ] 00:11:06.866 [2024-10-07 05:30:10.679713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.129 [2024-10-07 05:30:10.849447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.086 05:30:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:09.086 00:11:09.086 SPDK Configuration: 00:11:09.086 Core mask: 0x1 00:11:09.086 00:11:09.086 Accel Perf Configuration: 00:11:09.086 Workload Type: decompress 00:11:09.086 Transfer size: 4096 bytes 00:11:09.086 Vector count 1 00:11:09.086 Module: software 00:11:09.086 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:09.086 Queue depth: 32 00:11:09.086 Allocate depth: 32 00:11:09.086 # threads/core: 1 00:11:09.086 Run time: 1 seconds 00:11:09.086 Verify: Yes 00:11:09.086 00:11:09.086 Running for 1 seconds... 00:11:09.086 00:11:09.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:09.086 ------------------------------------------------------------------------------------ 00:11:09.086 0,0 67840/s 125 MiB/s 0 0 00:11:09.086 ==================================================================================== 00:11:09.086 Total 67840/s 265 MiB/s 0 0' 00:11:09.086 05:30:12 -- accel/accel.sh@20 -- # IFS=: 00:11:09.086 05:30:12 -- accel/accel.sh@20 -- # read -r var val 00:11:09.086 05:30:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.086 05:30:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:09.086 05:30:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.086 05:30:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.086 05:30:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.086 05:30:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.086 05:30:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.086 05:30:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.086 05:30:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.086 05:30:12 -- accel/accel.sh@42 -- # jq -r . 00:11:09.086 [2024-10-07 05:30:12.868340] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:09.086 [2024-10-07 05:30:12.868518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112070 ] 00:11:09.086 [2024-10-07 05:30:13.035810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.345 [2024-10-07 05:30:13.232377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=0x1 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=decompress 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=software 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@23 -- # accel_module=software 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=32 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- 
accel/accel.sh@21 -- # val=32 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=1 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val=Yes 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.603 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.603 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.603 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.604 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.604 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:09.604 05:30:13 -- accel/accel.sh@21 -- # val= 00:11:09.604 05:30:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.604 05:30:13 -- accel/accel.sh@20 -- # IFS=: 00:11:09.604 05:30:13 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@21 -- # val= 00:11:11.507 05:30:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # IFS=: 00:11:11.507 05:30:15 -- accel/accel.sh@20 -- # read -r var val 00:11:11.507 05:30:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:11.507 05:30:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:11.507 05:30:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:11.507 00:11:11.507 real 0m4.721s 00:11:11.507 user 0m4.189s 00:11:11.507 sys 0m0.348s 00:11:11.507 05:30:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.507 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:11:11.507 ************************************ 00:11:11.507 END TEST accel_decomp 00:11:11.507 ************************************ 00:11:11.507 05:30:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
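The compression-related tests in this stretch differ only in the accel_perf flags, all of which appear in the trace: compress and decompress both read the bib test file via -l, decompress adds -y to verify the output, and the accel_decmop_full variant adds -o 0, which appears to let the tool derive its transfer size from the input (the decompress run above reports 4096-byte transfers, the full variant 111250-byte transfers); treat that reading of -o 0 as an assumption. Restated as standalone commands, minus the -c /dev/fd/62 config redirection that accel.sh injects:
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib   # input file used by the harness
  # compress the test input for 1 second
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l "$BIB"
  # decompress the same input and verify the result (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y
  # full-buffer decompress; -o 0 assumed to size transfers from the input
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y -o 0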
00:11:11.507 05:30:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:11.507 05:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.507 05:30:15 -- common/autotest_common.sh@10 -- # set +x 00:11:11.507 ************************************ 00:11:11.507 START TEST accel_decmop_full 00:11:11.507 ************************************ 00:11:11.507 05:30:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:11.507 05:30:15 -- accel/accel.sh@16 -- # local accel_opc 00:11:11.507 05:30:15 -- accel/accel.sh@17 -- # local accel_module 00:11:11.507 05:30:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:11.507 05:30:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:11.507 05:30:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.507 05:30:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.507 05:30:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.507 05:30:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.507 05:30:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.507 05:30:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.507 05:30:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.507 05:30:15 -- accel/accel.sh@42 -- # jq -r . 00:11:11.507 [2024-10-07 05:30:15.278603] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:11.507 [2024-10-07 05:30:15.279330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112344 ] 00:11:11.507 [2024-10-07 05:30:15.443057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.766 [2024-10-07 05:30:15.619851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.666 05:30:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:13.666 00:11:13.666 SPDK Configuration: 00:11:13.666 Core mask: 0x1 00:11:13.666 00:11:13.666 Accel Perf Configuration: 00:11:13.666 Workload Type: decompress 00:11:13.666 Transfer size: 111250 bytes 00:11:13.666 Vector count 1 00:11:13.666 Module: software 00:11:13.666 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:13.666 Queue depth: 32 00:11:13.666 Allocate depth: 32 00:11:13.666 # threads/core: 1 00:11:13.666 Run time: 1 seconds 00:11:13.666 Verify: Yes 00:11:13.666 00:11:13.666 Running for 1 seconds... 
00:11:13.666 00:11:13.666 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:13.666 ------------------------------------------------------------------------------------ 00:11:13.666 0,0 5088/s 210 MiB/s 0 0 00:11:13.666 ==================================================================================== 00:11:13.666 Total 5088/s 539 MiB/s 0 0' 00:11:13.666 05:30:17 -- accel/accel.sh@20 -- # IFS=: 00:11:13.666 05:30:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:13.666 05:30:17 -- accel/accel.sh@20 -- # read -r var val 00:11:13.666 05:30:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:13.666 05:30:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.666 05:30:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.666 05:30:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.666 05:30:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.666 05:30:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.667 05:30:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.667 05:30:17 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.667 05:30:17 -- accel/accel.sh@42 -- # jq -r . 00:11:13.667 [2024-10-07 05:30:17.617379] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:13.667 [2024-10-07 05:30:17.617948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112963 ] 00:11:13.925 [2024-10-07 05:30:17.771356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.183 [2024-10-07 05:30:17.969019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=0x1 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=decompress 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:14.441 05:30:18 -- 
accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=software 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=32 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=32 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=1 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val=Yes 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:14.441 05:30:18 -- accel/accel.sh@21 -- # val= 00:11:14.441 05:30:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # IFS=: 00:11:14.441 05:30:18 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- 
accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@21 -- # val= 00:11:16.343 05:30:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # IFS=: 00:11:16.343 05:30:19 -- accel/accel.sh@20 -- # read -r var val 00:11:16.343 05:30:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:16.343 05:30:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:16.343 05:30:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.343 00:11:16.343 real 0m4.719s 00:11:16.343 user 0m4.227s 00:11:16.343 sys 0m0.318s 00:11:16.343 05:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.344 05:30:19 -- common/autotest_common.sh@10 -- # set +x 00:11:16.344 ************************************ 00:11:16.344 END TEST accel_decmop_full 00:11:16.344 ************************************ 00:11:16.344 05:30:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:16.344 05:30:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:16.344 05:30:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.344 05:30:19 -- common/autotest_common.sh@10 -- # set +x 00:11:16.344 ************************************ 00:11:16.344 START TEST accel_decomp_mcore 00:11:16.344 ************************************ 00:11:16.344 05:30:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:16.344 05:30:20 -- accel/accel.sh@16 -- # local accel_opc 00:11:16.344 05:30:20 -- accel/accel.sh@17 -- # local accel_module 00:11:16.344 05:30:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:16.344 05:30:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:16.344 05:30:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.344 05:30:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.344 05:30:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.344 05:30:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.344 05:30:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.344 05:30:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.344 05:30:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.344 05:30:20 -- accel/accel.sh@42 -- # jq -r . 00:11:16.344 [2024-10-07 05:30:20.040999] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:16.344 [2024-10-07 05:30:20.041355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113990 ] 00:11:16.344 [2024-10-07 05:30:20.215906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.602 [2024-10-07 05:30:20.398274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.602 [2024-10-07 05:30:20.398410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.602 [2024-10-07 05:30:20.398589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.602 [2024-10-07 05:30:20.398594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.504 05:30:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:18.504 00:11:18.504 SPDK Configuration: 00:11:18.504 Core mask: 0xf 00:11:18.504 00:11:18.504 Accel Perf Configuration: 00:11:18.504 Workload Type: decompress 00:11:18.504 Transfer size: 4096 bytes 00:11:18.504 Vector count 1 00:11:18.504 Module: software 00:11:18.504 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.504 Queue depth: 32 00:11:18.504 Allocate depth: 32 00:11:18.504 # threads/core: 1 00:11:18.504 Run time: 1 seconds 00:11:18.504 Verify: Yes 00:11:18.504 00:11:18.504 Running for 1 seconds... 00:11:18.504 00:11:18.504 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:18.504 ------------------------------------------------------------------------------------ 00:11:18.504 0,0 49344/s 90 MiB/s 0 0 00:11:18.504 3,0 46464/s 85 MiB/s 0 0 00:11:18.504 2,0 45760/s 84 MiB/s 0 0 00:11:18.504 1,0 46368/s 85 MiB/s 0 0 00:11:18.504 ==================================================================================== 00:11:18.504 Total 187936/s 734 MiB/s 0 0' 00:11:18.504 05:30:22 -- accel/accel.sh@20 -- # IFS=: 00:11:18.504 05:30:22 -- accel/accel.sh@20 -- # read -r var val 00:11:18.504 05:30:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:18.504 05:30:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:18.504 05:30:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.504 05:30:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.504 05:30:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.504 05:30:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.504 05:30:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.504 05:30:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.504 05:30:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.504 05:30:22 -- accel/accel.sh@42 -- # jq -r . 00:11:18.504 [2024-10-07 05:30:22.455726] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:18.504 [2024-10-07 05:30:22.456448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114932 ] 00:11:18.762 [2024-10-07 05:30:22.642030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.021 [2024-10-07 05:30:22.857705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.021 [2024-10-07 05:30:22.857780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.021 [2024-10-07 05:30:22.857923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.021 [2024-10-07 05:30:22.857933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=0xf 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=decompress 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=software 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@23 -- # accel_module=software 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 
00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=32 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=32 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=1 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val=Yes 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:19.280 05:30:23 -- accel/accel.sh@21 -- # val= 00:11:19.280 05:30:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # IFS=: 00:11:19.280 05:30:23 -- accel/accel.sh@20 -- # read -r var val 00:11:21.250 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- 
accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@21 -- # val= 00:11:21.251 05:30:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # IFS=: 00:11:21.251 05:30:24 -- accel/accel.sh@20 -- # read -r var val 00:11:21.251 05:30:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:21.251 05:30:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:21.251 05:30:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:21.251 00:11:21.251 real 0m4.910s 00:11:21.251 user 0m14.365s 00:11:21.251 sys 0m0.432s 00:11:21.251 05:30:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.251 ************************************ 00:11:21.251 END TEST accel_decomp_mcore 00:11:21.251 ************************************ 00:11:21.251 05:30:24 -- common/autotest_common.sh@10 -- # set +x 00:11:21.251 05:30:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:21.251 05:30:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:21.251 05:30:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:21.251 05:30:24 -- common/autotest_common.sh@10 -- # set +x 00:11:21.251 ************************************ 00:11:21.251 START TEST accel_decomp_full_mcore 00:11:21.251 ************************************ 00:11:21.251 05:30:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:21.251 05:30:24 -- accel/accel.sh@16 -- # local accel_opc 00:11:21.251 05:30:24 -- accel/accel.sh@17 -- # local accel_module 00:11:21.251 05:30:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:21.251 05:30:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:21.251 05:30:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.251 05:30:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.251 05:30:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.251 05:30:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.251 05:30:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.251 05:30:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.251 05:30:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.251 05:30:24 -- accel/accel.sh@42 -- # jq -r . 00:11:21.251 [2024-10-07 05:30:25.011429] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:21.251 [2024-10-07 05:30:25.011662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116211 ] 00:11:21.251 [2024-10-07 05:30:25.195572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.509 [2024-10-07 05:30:25.383090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.509 [2024-10-07 05:30:25.383283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.509 [2024-10-07 05:30:25.383227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.509 [2024-10-07 05:30:25.383284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.040 05:30:27 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:24.040 00:11:24.040 SPDK Configuration: 00:11:24.040 Core mask: 0xf 00:11:24.040 00:11:24.040 Accel Perf Configuration: 00:11:24.040 Workload Type: decompress 00:11:24.040 Transfer size: 111250 bytes 00:11:24.040 Vector count 1 00:11:24.040 Module: software 00:11:24.040 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.040 Queue depth: 32 00:11:24.040 Allocate depth: 32 00:11:24.040 # threads/core: 1 00:11:24.040 Run time: 1 seconds 00:11:24.040 Verify: Yes 00:11:24.040 00:11:24.040 Running for 1 seconds... 00:11:24.040 00:11:24.040 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.040 ------------------------------------------------------------------------------------ 00:11:24.040 0,0 4736/s 195 MiB/s 0 0 00:11:24.040 3,0 4864/s 200 MiB/s 0 0 00:11:24.040 2,0 4736/s 195 MiB/s 0 0 00:11:24.040 1,0 4512/s 186 MiB/s 0 0 00:11:24.040 ==================================================================================== 00:11:24.040 Total 18848/s 1999 MiB/s 0 0' 00:11:24.040 05:30:27 -- accel/accel.sh@20 -- # IFS=: 00:11:24.040 05:30:27 -- accel/accel.sh@20 -- # read -r var val 00:11:24.040 05:30:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:24.040 05:30:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:24.040 05:30:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.040 05:30:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.040 05:30:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.040 05:30:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.040 05:30:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.040 05:30:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.040 05:30:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.040 05:30:27 -- accel/accel.sh@42 -- # jq -r . 00:11:24.040 [2024-10-07 05:30:27.430937] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:24.040 [2024-10-07 05:30:27.431123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116995 ] 00:11:24.040 [2024-10-07 05:30:27.604364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.040 [2024-10-07 05:30:27.840817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.040 [2024-10-07 05:30:27.840909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.040 [2024-10-07 05:30:27.841062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.040 [2024-10-07 05:30:27.841065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=0xf 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=decompress 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=software 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@23 -- # accel_module=software 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 
00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=32 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=32 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=1 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val=Yes 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:24.298 05:30:28 -- accel/accel.sh@21 -- # val= 00:11:24.298 05:30:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # IFS=: 00:11:24.298 05:30:28 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.199 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.199 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.199 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.200 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.200 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.200 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.200 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.200 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.200 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.200 05:30:29 -- 
accel/accel.sh@20 -- # read -r var val 00:11:26.200 05:30:29 -- accel/accel.sh@21 -- # val= 00:11:26.200 05:30:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.200 05:30:29 -- accel/accel.sh@20 -- # IFS=: 00:11:26.200 05:30:29 -- accel/accel.sh@20 -- # read -r var val 00:11:26.200 05:30:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:26.200 05:30:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:26.200 05:30:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.200 00:11:26.200 real 0m4.944s 00:11:26.200 user 0m14.543s 00:11:26.200 sys 0m0.359s 00:11:26.200 05:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.200 ************************************ 00:11:26.200 END TEST accel_decomp_full_mcore 00:11:26.200 ************************************ 00:11:26.200 05:30:29 -- common/autotest_common.sh@10 -- # set +x 00:11:26.200 05:30:29 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:26.200 05:30:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:26.200 05:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.200 05:30:29 -- common/autotest_common.sh@10 -- # set +x 00:11:26.200 ************************************ 00:11:26.200 START TEST accel_decomp_mthread 00:11:26.200 ************************************ 00:11:26.200 05:30:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:26.200 05:30:29 -- accel/accel.sh@16 -- # local accel_opc 00:11:26.200 05:30:29 -- accel/accel.sh@17 -- # local accel_module 00:11:26.200 05:30:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:26.200 05:30:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:26.200 05:30:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.200 05:30:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.200 05:30:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.200 05:30:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.200 05:30:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.200 05:30:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.200 05:30:29 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.200 05:30:29 -- accel/accel.sh@42 -- # jq -r . 00:11:26.200 [2024-10-07 05:30:29.999608] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:26.200 [2024-10-07 05:30:30.000361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117721 ] 00:11:26.200 [2024-10-07 05:30:30.162256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.458 [2024-10-07 05:30:30.347500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.360 05:30:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:28.360 00:11:28.360 SPDK Configuration: 00:11:28.360 Core mask: 0x1 00:11:28.360 00:11:28.360 Accel Perf Configuration: 00:11:28.360 Workload Type: decompress 00:11:28.360 Transfer size: 4096 bytes 00:11:28.360 Vector count 1 00:11:28.360 Module: software 00:11:28.361 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:28.361 Queue depth: 32 00:11:28.361 Allocate depth: 32 00:11:28.361 # threads/core: 2 00:11:28.361 Run time: 1 seconds 00:11:28.361 Verify: Yes 00:11:28.361 00:11:28.361 Running for 1 seconds... 00:11:28.361 00:11:28.361 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:28.361 ------------------------------------------------------------------------------------ 00:11:28.361 0,1 35840/s 66 MiB/s 0 0 00:11:28.361 0,0 35744/s 65 MiB/s 0 0 00:11:28.361 ==================================================================================== 00:11:28.361 Total 71584/s 279 MiB/s 0 0' 00:11:28.361 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:28.361 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:28.361 05:30:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:28.361 05:30:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:28.361 05:30:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.361 05:30:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.361 05:30:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.361 05:30:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.361 05:30:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.361 05:30:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.361 05:30:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.361 05:30:32 -- accel/accel.sh@42 -- # jq -r . 00:11:28.361 [2024-10-07 05:30:32.290741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:28.361 [2024-10-07 05:30:32.290950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117870 ] 00:11:28.619 [2024-10-07 05:30:32.459354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.877 [2024-10-07 05:30:32.651219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=0x1 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=decompress 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=software 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@23 -- # accel_module=software 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=32 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- 
accel/accel.sh@21 -- # val=32 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=2 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val=Yes 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:29.137 05:30:32 -- accel/accel.sh@21 -- # val= 00:11:29.137 05:30:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # IFS=: 00:11:29.137 05:30:32 -- accel/accel.sh@20 -- # read -r var val 00:11:31.037 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.037 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.037 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.037 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.037 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.037 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.037 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.037 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.037 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.037 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.037 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.038 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.038 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.038 05:30:34 -- accel/accel.sh@21 -- # val= 00:11:31.038 05:30:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # IFS=: 00:11:31.038 05:30:34 -- accel/accel.sh@20 -- # read -r var val 00:11:31.038 05:30:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:31.038 05:30:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:31.038 05:30:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:31.038 00:11:31.038 real 0m4.636s 00:11:31.038 user 0m4.098s 00:11:31.038 sys 0m0.361s 00:11:31.038 05:30:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.038 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:11:31.038 ************************************ 00:11:31.038 END 
TEST accel_decomp_mthread 00:11:31.038 ************************************ 00:11:31.038 05:30:34 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:31.038 05:30:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:31.038 05:30:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.038 05:30:34 -- common/autotest_common.sh@10 -- # set +x 00:11:31.038 ************************************ 00:11:31.038 START TEST accel_deomp_full_mthread 00:11:31.038 ************************************ 00:11:31.038 05:30:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:31.038 05:30:34 -- accel/accel.sh@16 -- # local accel_opc 00:11:31.038 05:30:34 -- accel/accel.sh@17 -- # local accel_module 00:11:31.038 05:30:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:31.038 05:30:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:31.038 05:30:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:31.038 05:30:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:31.038 05:30:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.038 05:30:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.038 05:30:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:31.038 05:30:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:31.038 05:30:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:31.038 05:30:34 -- accel/accel.sh@42 -- # jq -r . 00:11:31.038 [2024-10-07 05:30:34.684781] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:31.038 [2024-10-07 05:30:34.685487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117955 ] 00:11:31.038 [2024-10-07 05:30:34.854840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.296 [2024-10-07 05:30:35.045734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.220 05:30:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:33.220 00:11:33.220 SPDK Configuration: 00:11:33.220 Core mask: 0x1 00:11:33.220 00:11:33.220 Accel Perf Configuration: 00:11:33.220 Workload Type: decompress 00:11:33.220 Transfer size: 111250 bytes 00:11:33.220 Vector count 1 00:11:33.220 Module: software 00:11:33.220 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.220 Queue depth: 32 00:11:33.220 Allocate depth: 32 00:11:33.220 # threads/core: 2 00:11:33.220 Run time: 1 seconds 00:11:33.220 Verify: Yes 00:11:33.220 00:11:33.220 Running for 1 seconds... 
00:11:33.220 00:11:33.220 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:33.220 ------------------------------------------------------------------------------------ 00:11:33.220 0,1 2720/s 112 MiB/s 0 0 00:11:33.220 0,0 2720/s 112 MiB/s 0 0 00:11:33.220 ==================================================================================== 00:11:33.220 Total 5440/s 577 MiB/s 0 0' 00:11:33.220 05:30:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:33.220 05:30:36 -- accel/accel.sh@20 -- # IFS=: 00:11:33.220 05:30:36 -- accel/accel.sh@20 -- # read -r var val 00:11:33.220 05:30:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:33.220 05:30:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:33.220 05:30:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:33.220 05:30:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.220 05:30:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.220 05:30:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:33.220 05:30:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:33.220 05:30:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:33.220 05:30:36 -- accel/accel.sh@42 -- # jq -r . 00:11:33.220 [2024-10-07 05:30:36.991988] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:33.220 [2024-10-07 05:30:36.992189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118049 ] 00:11:33.220 [2024-10-07 05:30:37.159980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.479 [2024-10-07 05:30:37.355463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val=0x1 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val=decompress 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val=software 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.737 05:30:37 -- accel/accel.sh@23 -- # accel_module=software 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.737 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.737 05:30:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.737 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val=32 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val=32 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val=2 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val=Yes 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:33.738 05:30:37 -- accel/accel.sh@21 -- # val= 00:11:33.738 05:30:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # IFS=: 00:11:33.738 05:30:37 -- accel/accel.sh@20 -- # read -r var val 00:11:35.638 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.638 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.638 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.638 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.638 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.638 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # 
read -r var val 00:11:35.638 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.638 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.638 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.638 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.638 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.639 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.639 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.639 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.639 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.639 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.639 05:30:39 -- accel/accel.sh@21 -- # val= 00:11:35.639 05:30:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.639 05:30:39 -- accel/accel.sh@20 -- # IFS=: 00:11:35.639 05:30:39 -- accel/accel.sh@20 -- # read -r var val 00:11:35.639 05:30:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:35.639 05:30:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:35.639 05:30:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:35.639 00:11:35.639 real 0m4.666s 00:11:35.639 user 0m4.142s 00:11:35.639 sys 0m0.360s 00:11:35.639 05:30:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.639 ************************************ 00:11:35.639 END TEST accel_deomp_full_mthread 00:11:35.639 05:30:39 -- common/autotest_common.sh@10 -- # set +x 00:11:35.639 ************************************ 00:11:35.639 05:30:39 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:35.639 05:30:39 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:35.639 05:30:39 -- accel/accel.sh@129 -- # build_accel_config 00:11:35.639 05:30:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:35.639 05:30:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:35.639 05:30:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.639 05:30:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.639 05:30:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.639 05:30:39 -- common/autotest_common.sh@10 -- # set +x 00:11:35.639 05:30:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:35.639 05:30:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:35.639 05:30:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:35.639 05:30:39 -- accel/accel.sh@42 -- # jq -r . 00:11:35.639 ************************************ 00:11:35.639 START TEST accel_dif_functional_tests 00:11:35.639 ************************************ 00:11:35.639 05:30:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:35.639 [2024-10-07 05:30:39.431265] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:35.639 [2024-10-07 05:30:39.431459] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118236 ] 00:11:35.639 [2024-10-07 05:30:39.611397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:35.897 [2024-10-07 05:30:39.795310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.897 [2024-10-07 05:30:39.795450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.897 [2024-10-07 05:30:39.795459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.156 00:11:36.156 00:11:36.156 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.156 http://cunit.sourceforge.net/ 00:11:36.156 00:11:36.156 00:11:36.156 Suite: accel_dif 00:11:36.156 Test: verify: DIF generated, GUARD check ...passed 00:11:36.156 Test: verify: DIF generated, APPTAG check ...passed 00:11:36.156 Test: verify: DIF generated, REFTAG check ...passed 00:11:36.156 Test: verify: DIF not generated, GUARD check ...[2024-10-07 05:30:40.078863] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:36.156 passed 00:11:36.156 Test: verify: DIF not generated, APPTAG check ...[2024-10-07 05:30:40.079064] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:36.156 [2024-10-07 05:30:40.079143] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:36.156 [2024-10-07 05:30:40.079199] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:36.156 passed 00:11:36.156 Test: verify: DIF not generated, REFTAG check ...[2024-10-07 05:30:40.079269] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:36.156 [2024-10-07 05:30:40.079335] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:36.156 passed 00:11:36.156 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:36.156 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:11:36.156 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:36.156 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:36.156 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:36.156 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:11:36.156 Test: generate copy: DIF generated, GUARD check ...passed 00:11:36.156 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:36.156 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:36.156 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:36.156 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:36.156 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:36.156 Test: generate copy: iovecs-len validate ...passed 00:11:36.156 Test: generate copy: buffer alignment validate ...passed 00:11:36.156 00:11:36.156 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.156 suites 1 1 n/a 0 0 00:11:36.156 tests 20 20 20 0 0 00:11:36.157 asserts 204 204 204 0 n/a 00:11:36.157 00:11:36.157 Elapsed time = 0.005 seconds 00:11:36.157 [2024-10-07 05:30:40.079491] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:36.157 [2024-10-07 05:30:40.079732] dif.c: 
813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:36.157 [2024-10-07 05:30:40.080213] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:11:37.092 00:11:37.092 real 0m1.705s 00:11:37.092 user 0m3.224s 00:11:37.092 sys 0m0.259s 00:11:37.092 05:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.092 ************************************ 00:11:37.092 END TEST accel_dif_functional_tests 00:11:37.092 ************************************ 00:11:37.092 05:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 ************************************ 00:11:37.351 END TEST accel 00:11:37.351 ************************************ 00:11:37.351 00:11:37.351 real 1m42.642s 00:11:37.351 user 1m53.086s 00:11:37.351 sys 0m8.522s 00:11:37.351 05:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.351 05:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 05:30:41 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:37.351 05:30:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:37.351 05:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.351 05:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 ************************************ 00:11:37.351 START TEST accel_rpc 00:11:37.351 ************************************ 00:11:37.351 05:30:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:37.351 * Looking for test storage... 00:11:37.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:37.351 05:30:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:37.351 05:30:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=118375 00:11:37.351 05:30:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 118375 00:11:37.351 05:30:41 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:37.351 05:30:41 -- common/autotest_common.sh@819 -- # '[' -z 118375 ']' 00:11:37.351 05:30:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.351 05:30:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:37.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.351 05:30:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.351 05:30:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:37.351 05:30:41 -- common/autotest_common.sh@10 -- # set +x 00:11:37.351 [2024-10-07 05:30:41.299930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:37.351 [2024-10-07 05:30:41.300091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118375 ] 00:11:37.609 [2024-10-07 05:30:41.458943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.867 [2024-10-07 05:30:41.643210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:37.867 [2024-10-07 05:30:41.643402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.434 05:30:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:38.434 05:30:42 -- common/autotest_common.sh@852 -- # return 0 00:11:38.434 05:30:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:38.434 05:30:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:38.434 05:30:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:38.434 05:30:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:38.434 05:30:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:38.434 05:30:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:38.434 05:30:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.434 05:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:38.434 ************************************ 00:11:38.434 START TEST accel_assign_opcode 00:11:38.435 ************************************ 00:11:38.435 05:30:42 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:38.435 05:30:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:38.435 05:30:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.435 05:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:38.435 [2024-10-07 05:30:42.288116] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:38.435 05:30:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.435 05:30:42 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:38.435 05:30:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.435 05:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:38.435 [2024-10-07 05:30:42.296091] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:38.435 05:30:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:38.435 05:30:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:38.435 05:30:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:38.435 05:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:39.002 05:30:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.002 05:30:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:39.002 05:30:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:39.002 05:30:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:39.002 05:30:42 -- common/autotest_common.sh@10 -- # set +x 00:11:39.002 05:30:42 -- accel/accel_rpc.sh@42 -- # grep software 00:11:39.002 05:30:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:39.260 software 00:11:39.260 00:11:39.260 real 0m0.736s 00:11:39.260 user 0m0.074s 00:11:39.260 sys 0m0.008s 00:11:39.260 05:30:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.260 05:30:43 -- common/autotest_common.sh@10 -- # set +x 00:11:39.260 ************************************ 
00:11:39.260 END TEST accel_assign_opcode 00:11:39.260 ************************************ 00:11:39.260 05:30:43 -- accel/accel_rpc.sh@55 -- # killprocess 118375 00:11:39.260 05:30:43 -- common/autotest_common.sh@926 -- # '[' -z 118375 ']' 00:11:39.260 05:30:43 -- common/autotest_common.sh@930 -- # kill -0 118375 00:11:39.260 05:30:43 -- common/autotest_common.sh@931 -- # uname 00:11:39.260 05:30:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:39.260 05:30:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118375 00:11:39.260 05:30:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:39.260 05:30:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:39.260 05:30:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118375' 00:11:39.260 killing process with pid 118375 00:11:39.260 05:30:43 -- common/autotest_common.sh@945 -- # kill 118375 00:11:39.260 05:30:43 -- common/autotest_common.sh@950 -- # wait 118375 00:11:41.162 00:11:41.162 real 0m3.744s 00:11:41.162 user 0m3.833s 00:11:41.162 sys 0m0.491s 00:11:41.162 05:30:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.162 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:41.162 ************************************ 00:11:41.162 END TEST accel_rpc 00:11:41.162 ************************************ 00:11:41.162 05:30:44 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:41.162 05:30:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:41.162 05:30:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.163 05:30:44 -- common/autotest_common.sh@10 -- # set +x 00:11:41.163 ************************************ 00:11:41.163 START TEST app_cmdline 00:11:41.163 ************************************ 00:11:41.163 05:30:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:41.163 * Looking for test storage... 00:11:41.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:41.163 05:30:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:41.163 05:30:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=118600 00:11:41.163 05:30:45 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:41.163 05:30:45 -- app/cmdline.sh@18 -- # waitforlisten 118600 00:11:41.163 05:30:45 -- common/autotest_common.sh@819 -- # '[' -z 118600 ']' 00:11:41.163 05:30:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.163 05:30:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:41.163 05:30:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.163 05:30:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:41.163 05:30:45 -- common/autotest_common.sh@10 -- # set +x 00:11:41.163 [2024-10-07 05:30:45.102627] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:41.163 [2024-10-07 05:30:45.102816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118600 ] 00:11:41.421 [2024-10-07 05:30:45.266572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.679 [2024-10-07 05:30:45.442521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.679 [2024-10-07 05:30:45.442763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.056 05:30:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:43.056 05:30:46 -- common/autotest_common.sh@852 -- # return 0 00:11:43.056 05:30:46 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:43.056 { 00:11:43.056 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:11:43.056 "fields": { 00:11:43.056 "major": 24, 00:11:43.056 "minor": 1, 00:11:43.056 "patch": 1, 00:11:43.056 "suffix": "-pre", 00:11:43.056 "commit": "726a04d70" 00:11:43.056 } 00:11:43.056 } 00:11:43.056 05:30:46 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:43.056 05:30:46 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:43.056 05:30:46 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:43.056 05:30:46 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:43.056 05:30:46 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:43.056 05:30:46 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:43.056 05:30:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.056 05:30:46 -- app/cmdline.sh@26 -- # sort 00:11:43.056 05:30:46 -- common/autotest_common.sh@10 -- # set +x 00:11:43.056 05:30:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.056 05:30:46 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:43.056 05:30:46 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:43.056 05:30:46 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:43.056 05:30:46 -- common/autotest_common.sh@640 -- # local es=0 00:11:43.056 05:30:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:43.056 05:30:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.056 05:30:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.056 05:30:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.056 05:30:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.056 05:30:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.056 05:30:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.056 05:30:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.056 05:30:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:43.056 05:30:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:43.315 request: 00:11:43.315 { 00:11:43.315 "method": "env_dpdk_get_mem_stats", 00:11:43.315 "req_id": 1 00:11:43.315 } 00:11:43.315 Got 
JSON-RPC error response 00:11:43.315 response: 00:11:43.315 { 00:11:43.315 "code": -32601, 00:11:43.315 "message": "Method not found" 00:11:43.315 } 00:11:43.315 05:30:47 -- common/autotest_common.sh@643 -- # es=1 00:11:43.315 05:30:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:43.315 05:30:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:43.315 05:30:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:43.315 05:30:47 -- app/cmdline.sh@1 -- # killprocess 118600 00:11:43.315 05:30:47 -- common/autotest_common.sh@926 -- # '[' -z 118600 ']' 00:11:43.315 05:30:47 -- common/autotest_common.sh@930 -- # kill -0 118600 00:11:43.315 05:30:47 -- common/autotest_common.sh@931 -- # uname 00:11:43.315 05:30:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:43.315 05:30:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118600 00:11:43.315 05:30:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:43.315 05:30:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:43.315 killing process with pid 118600 00:11:43.315 05:30:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118600' 00:11:43.315 05:30:47 -- common/autotest_common.sh@945 -- # kill 118600 00:11:43.315 05:30:47 -- common/autotest_common.sh@950 -- # wait 118600 00:11:45.221 00:11:45.221 real 0m4.071s 00:11:45.221 user 0m4.572s 00:11:45.221 sys 0m0.582s 00:11:45.221 05:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.221 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:45.221 ************************************ 00:11:45.221 END TEST app_cmdline 00:11:45.221 ************************************ 00:11:45.221 05:30:49 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:45.221 05:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:45.221 05:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.221 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:45.221 ************************************ 00:11:45.221 START TEST version 00:11:45.221 ************************************ 00:11:45.221 05:30:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:45.221 * Looking for test storage... 
00:11:45.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:45.221 05:30:49 -- app/version.sh@17 -- # get_header_version major 00:11:45.221 05:30:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:45.221 05:30:49 -- app/version.sh@14 -- # tr -d '"' 00:11:45.221 05:30:49 -- app/version.sh@14 -- # cut -f2 00:11:45.221 05:30:49 -- app/version.sh@17 -- # major=24 00:11:45.221 05:30:49 -- app/version.sh@18 -- # get_header_version minor 00:11:45.221 05:30:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:45.221 05:30:49 -- app/version.sh@14 -- # cut -f2 00:11:45.221 05:30:49 -- app/version.sh@14 -- # tr -d '"' 00:11:45.221 05:30:49 -- app/version.sh@18 -- # minor=1 00:11:45.221 05:30:49 -- app/version.sh@19 -- # get_header_version patch 00:11:45.222 05:30:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:45.222 05:30:49 -- app/version.sh@14 -- # cut -f2 00:11:45.222 05:30:49 -- app/version.sh@14 -- # tr -d '"' 00:11:45.222 05:30:49 -- app/version.sh@19 -- # patch=1 00:11:45.222 05:30:49 -- app/version.sh@20 -- # get_header_version suffix 00:11:45.222 05:30:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:45.222 05:30:49 -- app/version.sh@14 -- # cut -f2 00:11:45.222 05:30:49 -- app/version.sh@14 -- # tr -d '"' 00:11:45.222 05:30:49 -- app/version.sh@20 -- # suffix=-pre 00:11:45.222 05:30:49 -- app/version.sh@22 -- # version=24.1 00:11:45.222 05:30:49 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:45.222 05:30:49 -- app/version.sh@25 -- # version=24.1.1 00:11:45.222 05:30:49 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:45.222 05:30:49 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:45.222 05:30:49 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:45.500 05:30:49 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:45.500 05:30:49 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:45.500 00:11:45.500 real 0m0.137s 00:11:45.500 user 0m0.116s 00:11:45.500 sys 0m0.052s 00:11:45.500 05:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.500 ************************************ 00:11:45.500 END TEST version 00:11:45.500 ************************************ 00:11:45.500 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:45.500 05:30:49 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:45.500 05:30:49 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:45.500 05:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:45.500 05:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.500 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:45.500 ************************************ 00:11:45.500 START TEST blockdev_general 00:11:45.500 ************************************ 00:11:45.500 05:30:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:45.500 * Looking for test storage... 
00:11:45.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:45.500 05:30:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:45.500 05:30:49 -- bdev/nbd_common.sh@6 -- # set -e 00:11:45.500 05:30:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:45.500 05:30:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:45.500 05:30:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:45.500 05:30:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:45.500 05:30:49 -- bdev/blockdev.sh@18 -- # : 00:11:45.500 05:30:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:45.500 05:30:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:45.500 05:30:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:45.500 05:30:49 -- bdev/blockdev.sh@672 -- # uname -s 00:11:45.500 05:30:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:45.500 05:30:49 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:45.500 05:30:49 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:45.500 05:30:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:45.500 05:30:49 -- bdev/blockdev.sh@682 -- # dek= 00:11:45.500 05:30:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:45.500 05:30:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:45.500 05:30:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:45.500 05:30:49 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:45.500 05:30:49 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:45.500 05:30:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:45.500 05:30:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=118915 00:11:45.500 05:30:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:45.500 05:30:49 -- bdev/blockdev.sh@47 -- # waitforlisten 118915 00:11:45.500 05:30:49 -- common/autotest_common.sh@819 -- # '[' -z 118915 ']' 00:11:45.500 05:30:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.500 05:30:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:45.500 05:30:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.500 05:30:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:45.500 05:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:45.500 05:30:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:45.500 [2024-10-07 05:30:49.421237] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:45.500 [2024-10-07 05:30:49.421610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118915 ] 00:11:45.771 [2024-10-07 05:30:49.591460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.029 [2024-10-07 05:30:49.833188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.029 [2024-10-07 05:30:49.833471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.596 05:30:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:46.596 05:30:50 -- common/autotest_common.sh@852 -- # return 0 00:11:46.596 05:30:50 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:46.596 05:30:50 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:46.596 05:30:50 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:46.596 05:30:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.596 05:30:50 -- common/autotest_common.sh@10 -- # set +x 00:11:47.163 [2024-10-07 05:30:50.996885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.163 [2024-10-07 05:30:50.996996] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.163 00:11:47.163 [2024-10-07 05:30:51.004829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.163 [2024-10-07 05:30:51.005093] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.163 00:11:47.163 Malloc0 00:11:47.163 Malloc1 00:11:47.163 Malloc2 00:11:47.421 Malloc3 00:11:47.421 Malloc4 00:11:47.421 Malloc5 00:11:47.421 Malloc6 00:11:47.421 Malloc7 00:11:47.421 Malloc8 00:11:47.421 Malloc9 00:11:47.421 [2024-10-07 05:30:51.370936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:47.421 [2024-10-07 05:30:51.371194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.421 [2024-10-07 05:30:51.371266] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:11:47.421 [2024-10-07 05:30:51.371412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.421 [2024-10-07 05:30:51.373560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.421 [2024-10-07 05:30:51.373756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:47.421 TestPT 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:47.682 5000+0 records in 00:11:47.682 5000+0 records out 00:11:47.682 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0265892 s, 385 MB/s 00:11:47.682 05:30:51 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:47.682 AIO0 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 
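For reference, the AIO target prepared above can be reproduced by hand against a running spdk_tgt. The following is only a minimal sketch of the equivalent manual steps, assuming the default RPC socket and reusing the backing-file path and AIO0 name shown in the run:
# create the 10 MB backing file, register it as an AIO bdev with 2048-byte blocks, then verify it is listed
dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b AIO0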
00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@738 -- # cat 00:11:47.682 05:30:51 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:47.682 05:30:51 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:47.682 05:30:51 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:47.682 05:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:47.682 05:30:51 -- common/autotest_common.sh@10 -- # set +x 00:11:47.682 05:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:47.682 05:30:51 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:47.682 05:30:51 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:47.683 05:30:51 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b646633-1e37-500c-ad55-e7bfcf653e5d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0b646633-1e37-500c-ad55-e7bfcf653e5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "9fd7a7fa-2ef8-5f29-8462-2751a965728f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9fd7a7fa-2ef8-5f29-8462-2751a965728f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bce5c2d6-c823-5285-aa35-274aae8692d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bce5c2d6-c823-5285-aa35-274aae8692d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "dcc3d22d-d66b-508b-8dd8-3d4ef838efac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dcc3d22d-d66b-508b-8dd8-3d4ef838efac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "424d2446-c282-5c64-b042-a58b9ed43e62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "424d2446-c282-5c64-b042-a58b9ed43e62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7f04ce2-dd89-5790-bb02-9fbd13853d9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7f04ce2-dd89-5790-bb02-9fbd13853d9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "428adfe9-1782-5438-a093-eb786fbbf222"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "428adfe9-1782-5438-a093-eb786fbbf222",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "36981328-bda0-51a2-b9ba-08e6b6ea7690"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36981328-bda0-51a2-b9ba-08e6b6ea7690",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "6b4681b7-2a04-5959-a144-f9c26e29a3d9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b4681b7-2a04-5959-a144-f9c26e29a3d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "92a6dfff-437d-577d-b40f-896dac1e4d97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "92a6dfff-437d-577d-b40f-896dac1e4d97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1650caa1-5031-50f9-8280-b9dedc688535"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1650caa1-5031-50f9-8280-b9dedc688535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e34b9df3-3e02-4c67-880f-224bb7c0d0f1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "97e631f8-d36a-4051-9c27-88793239decd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8e6e0a45-324d-4e82-9761-38cb2d632da0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dadbf68-ae39-4c59-b64c-be22642bcb03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef574f76-a03f-4230-855d-010861fe6799",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fbb94a2f-ffc9-424d-9815-a6d717d7d234"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7df0b244-89ab-42e8-a7ab-70981b6ad1f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3d2decfd-a688-4c2e-8e4f-142451992433",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "57fb395e-21c4-4932-83eb-e7cbb243cf94"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "57fb395e-21c4-4932-83eb-e7cbb243cf94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:47.942 05:30:51 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:47.942 05:30:51 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:47.942 05:30:51 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:47.942 05:30:51 -- bdev/blockdev.sh@752 -- # killprocess 118915 00:11:47.942 05:30:51 -- common/autotest_common.sh@926 -- # '[' -z 118915 ']' 00:11:47.942 05:30:51 -- common/autotest_common.sh@930 -- # kill -0 118915 00:11:47.942 05:30:51 -- common/autotest_common.sh@931 -- # uname 00:11:47.942 05:30:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.942 05:30:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118915 00:11:47.942 05:30:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:47.942 05:30:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:47.942 05:30:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118915' 00:11:47.942 killing process with pid 118915 00:11:47.942 05:30:51 -- common/autotest_common.sh@945 -- # kill 118915 00:11:47.942 05:30:51 -- common/autotest_common.sh@950 -- # wait 118915 00:11:50.476 05:30:54 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:50.476 05:30:54 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.476 05:30:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:50.476 05:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.476 05:30:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.734 ************************************ 00:11:50.734 START TEST bdev_hello_world 00:11:50.734 ************************************ 00:11:50.734 05:30:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.734 [2024-10-07 05:30:54.529852] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:50.734 [2024-10-07 05:30:54.530245] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119122 ] 00:11:50.734 [2024-10-07 05:30:54.697294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.992 [2024-10-07 05:30:54.917859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.560 [2024-10-07 05:30:55.301903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.560 [2024-10-07 05:30:55.302275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.560 [2024-10-07 05:30:55.309850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.560 [2024-10-07 05:30:55.310080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.560 [2024-10-07 05:30:55.317877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.560 [2024-10-07 05:30:55.318073] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.560 [2024-10-07 05:30:55.318214] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:51.560 [2024-10-07 05:30:55.519054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.560 [2024-10-07 05:30:55.519438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.560 [2024-10-07 05:30:55.519550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:51.560 [2024-10-07 05:30:55.519831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.560 [2024-10-07 05:30:55.522280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.560 [2024-10-07 05:30:55.522471] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:52.129 [2024-10-07 05:30:55.835773] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:52.129 [2024-10-07 05:30:55.836161] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:52.129 [2024-10-07 05:30:55.836310] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:52.129 [2024-10-07 05:30:55.836548] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:52.129 [2024-10-07 05:30:55.836824] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:52.129 [2024-10-07 05:30:55.837030] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:52.129 [2024-10-07 05:30:55.837289] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:11:52.129 00:11:52.129 [2024-10-07 05:30:55.837507] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:54.032 ************************************ 00:11:54.032 END TEST bdev_hello_world 00:11:54.032 ************************************ 00:11:54.032 00:11:54.032 real 0m3.342s 00:11:54.032 user 0m2.723s 00:11:54.032 sys 0m0.452s 00:11:54.033 05:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.033 05:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.033 05:30:57 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:54.033 05:30:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:54.033 05:30:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:54.033 05:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.033 ************************************ 00:11:54.033 START TEST bdev_bounds 00:11:54.033 ************************************ 00:11:54.033 05:30:57 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:54.033 05:30:57 -- bdev/blockdev.sh@288 -- # bdevio_pid=119295 00:11:54.033 05:30:57 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:54.033 05:30:57 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:54.033 Process bdevio pid: 119295 00:11:54.033 05:30:57 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119295' 00:11:54.033 05:30:57 -- bdev/blockdev.sh@291 -- # waitforlisten 119295 00:11:54.033 05:30:57 -- common/autotest_common.sh@819 -- # '[' -z 119295 ']' 00:11:54.033 05:30:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.033 05:30:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:54.033 05:30:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.033 05:30:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:54.033 05:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:54.033 [2024-10-07 05:30:57.918135] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:54.033 [2024-10-07 05:30:57.918537] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119295 ] 00:11:54.291 [2024-10-07 05:30:58.087152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.554 [2024-10-07 05:30:58.305915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.554 [2024-10-07 05:30:58.305986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.554 [2024-10-07 05:30:58.305992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.816 [2024-10-07 05:30:58.693813] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.816 [2024-10-07 05:30:58.694209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.816 [2024-10-07 05:30:58.701775] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.816 [2024-10-07 05:30:58.702015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.816 [2024-10-07 05:30:58.709803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.816 [2024-10-07 05:30:58.710000] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:54.816 [2024-10-07 05:30:58.710154] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:55.077 [2024-10-07 05:30:58.904319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:55.077 [2024-10-07 05:30:58.904854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.077 [2024-10-07 05:30:58.905053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:55.077 [2024-10-07 05:30:58.905194] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.077 [2024-10-07 05:30:58.907828] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.077 [2024-10-07 05:30:58.908032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:55.644 05:30:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:55.644 05:30:59 -- common/autotest_common.sh@852 -- # return 0 00:11:55.644 05:30:59 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:55.903 I/O targets: 00:11:55.903 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:55.903 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:55.903 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:55.903 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:55.903 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:55.903 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:55.903 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:55.903 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:55.903 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
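For orientation, the bdev_bounds stage above amounts to launching bdevio in wait mode against the generated bdev.json and then triggering the suites over RPC. A minimal sketch of that flow, assuming bdevio is simply backgrounded here instead of being wrapped by run_test:
# start bdevio (-w waits for the perform_tests RPC; -s 0 passes the PRE_RESERVED_MEM value seen above), then kick off the suites
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests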
00:11:55.903 00:11:55.903 00:11:55.903 CUnit - A unit testing framework for C - Version 2.1-3 00:11:55.903 http://cunit.sourceforge.net/ 00:11:55.903 00:11:55.903 00:11:55.903 Suite: bdevio tests on: AIO0 00:11:55.903 Test: blockdev write read block ...passed 00:11:55.903 Test: blockdev write zeroes read block ...passed 00:11:55.903 Test: blockdev write zeroes read no split ...passed 00:11:55.903 Test: blockdev write zeroes read split ...passed 00:11:55.903 Test: blockdev write zeroes read split partial ...passed 00:11:55.903 Test: blockdev reset ...passed 00:11:55.903 Test: blockdev write read 8 blocks ...passed 00:11:55.903 Test: blockdev write read size > 128k ...passed 00:11:55.903 Test: blockdev write read invalid size ...passed 00:11:55.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.903 Test: blockdev write read max offset ...passed 00:11:55.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.903 Test: blockdev writev readv 8 blocks ...passed 00:11:55.903 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.903 Test: blockdev writev readv block ...passed 00:11:55.903 Test: blockdev writev readv size > 128k ...passed 00:11:55.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.903 Test: blockdev comparev and writev ...passed 00:11:55.903 Test: blockdev nvme passthru rw ...passed 00:11:55.903 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.903 Test: blockdev nvme admin passthru ...passed 00:11:55.903 Test: blockdev copy ...passed 00:11:55.903 Suite: bdevio tests on: raid1 00:11:55.903 Test: blockdev write read block ...passed 00:11:55.903 Test: blockdev write zeroes read block ...passed 00:11:55.903 Test: blockdev write zeroes read no split ...passed 00:11:55.903 Test: blockdev write zeroes read split ...passed 00:11:55.903 Test: blockdev write zeroes read split partial ...passed 00:11:55.903 Test: blockdev reset ...passed 00:11:55.903 Test: blockdev write read 8 blocks ...passed 00:11:55.903 Test: blockdev write read size > 128k ...passed 00:11:55.903 Test: blockdev write read invalid size ...passed 00:11:55.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.903 Test: blockdev write read max offset ...passed 00:11:55.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.903 Test: blockdev writev readv 8 blocks ...passed 00:11:55.903 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.903 Test: blockdev writev readv block ...passed 00:11:55.903 Test: blockdev writev readv size > 128k ...passed 00:11:55.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.903 Test: blockdev comparev and writev ...passed 00:11:55.903 Test: blockdev nvme passthru rw ...passed 00:11:55.903 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.903 Test: blockdev nvme admin passthru ...passed 00:11:55.903 Test: blockdev copy ...passed 00:11:55.903 Suite: bdevio tests on: concat0 00:11:55.903 Test: blockdev write read block ...passed 00:11:55.903 Test: blockdev write zeroes read block ...passed 00:11:55.903 Test: blockdev write zeroes read no split ...passed 00:11:55.903 Test: blockdev write zeroes read split ...passed 00:11:55.903 Test: blockdev write zeroes read split partial ...passed 00:11:55.903 Test: blockdev reset 
...passed 00:11:55.903 Test: blockdev write read 8 blocks ...passed 00:11:55.903 Test: blockdev write read size > 128k ...passed 00:11:55.903 Test: blockdev write read invalid size ...passed 00:11:55.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.903 Test: blockdev write read max offset ...passed 00:11:55.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.903 Test: blockdev writev readv 8 blocks ...passed 00:11:55.903 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.903 Test: blockdev writev readv block ...passed 00:11:55.903 Test: blockdev writev readv size > 128k ...passed 00:11:55.903 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:55.903 Test: blockdev comparev and writev ...passed 00:11:55.903 Test: blockdev nvme passthru rw ...passed 00:11:55.903 Test: blockdev nvme passthru vendor specific ...passed 00:11:55.903 Test: blockdev nvme admin passthru ...passed 00:11:55.903 Test: blockdev copy ...passed 00:11:55.903 Suite: bdevio tests on: raid0 00:11:55.903 Test: blockdev write read block ...passed 00:11:55.903 Test: blockdev write zeroes read block ...passed 00:11:55.903 Test: blockdev write zeroes read no split ...passed 00:11:55.903 Test: blockdev write zeroes read split ...passed 00:11:55.903 Test: blockdev write zeroes read split partial ...passed 00:11:55.903 Test: blockdev reset ...passed 00:11:55.903 Test: blockdev write read 8 blocks ...passed 00:11:55.903 Test: blockdev write read size > 128k ...passed 00:11:55.903 Test: blockdev write read invalid size ...passed 00:11:55.903 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:55.903 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:55.903 Test: blockdev write read max offset ...passed 00:11:55.903 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:55.903 Test: blockdev writev readv 8 blocks ...passed 00:11:55.903 Test: blockdev writev readv 30 x 1block ...passed 00:11:55.903 Test: blockdev writev readv block ...passed 00:11:56.162 Test: blockdev writev readv size > 128k ...passed 00:11:56.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.162 Test: blockdev comparev and writev ...passed 00:11:56.162 Test: blockdev nvme passthru rw ...passed 00:11:56.162 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.162 Test: blockdev nvme admin passthru ...passed 00:11:56.162 Test: blockdev copy ...passed 00:11:56.162 Suite: bdevio tests on: TestPT 00:11:56.162 Test: blockdev write read block ...passed 00:11:56.162 Test: blockdev write zeroes read block ...passed 00:11:56.162 Test: blockdev write zeroes read no split ...passed 00:11:56.162 Test: blockdev write zeroes read split ...passed 00:11:56.162 Test: blockdev write zeroes read split partial ...passed 00:11:56.162 Test: blockdev reset ...passed 00:11:56.162 Test: blockdev write read 8 blocks ...passed 00:11:56.162 Test: blockdev write read size > 128k ...passed 00:11:56.162 Test: blockdev write read invalid size ...passed 00:11:56.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.162 Test: blockdev write read max offset ...passed 00:11:56.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.162 Test: blockdev writev readv 8 blocks 
...passed 00:11:56.162 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.162 Test: blockdev writev readv block ...passed 00:11:56.162 Test: blockdev writev readv size > 128k ...passed 00:11:56.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.162 Test: blockdev comparev and writev ...passed 00:11:56.162 Test: blockdev nvme passthru rw ...passed 00:11:56.162 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.162 Test: blockdev nvme admin passthru ...passed 00:11:56.162 Test: blockdev copy ...passed 00:11:56.162 Suite: bdevio tests on: Malloc2p7 00:11:56.162 Test: blockdev write read block ...passed 00:11:56.162 Test: blockdev write zeroes read block ...passed 00:11:56.162 Test: blockdev write zeroes read no split ...passed 00:11:56.162 Test: blockdev write zeroes read split ...passed 00:11:56.162 Test: blockdev write zeroes read split partial ...passed 00:11:56.162 Test: blockdev reset ...passed 00:11:56.162 Test: blockdev write read 8 blocks ...passed 00:11:56.162 Test: blockdev write read size > 128k ...passed 00:11:56.162 Test: blockdev write read invalid size ...passed 00:11:56.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.162 Test: blockdev write read max offset ...passed 00:11:56.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.162 Test: blockdev writev readv 8 blocks ...passed 00:11:56.162 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.162 Test: blockdev writev readv block ...passed 00:11:56.162 Test: blockdev writev readv size > 128k ...passed 00:11:56.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.162 Test: blockdev comparev and writev ...passed 00:11:56.162 Test: blockdev nvme passthru rw ...passed 00:11:56.162 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.162 Test: blockdev nvme admin passthru ...passed 00:11:56.162 Test: blockdev copy ...passed 00:11:56.162 Suite: bdevio tests on: Malloc2p6 00:11:56.162 Test: blockdev write read block ...passed 00:11:56.162 Test: blockdev write zeroes read block ...passed 00:11:56.162 Test: blockdev write zeroes read no split ...passed 00:11:56.162 Test: blockdev write zeroes read split ...passed 00:11:56.162 Test: blockdev write zeroes read split partial ...passed 00:11:56.162 Test: blockdev reset ...passed 00:11:56.162 Test: blockdev write read 8 blocks ...passed 00:11:56.162 Test: blockdev write read size > 128k ...passed 00:11:56.162 Test: blockdev write read invalid size ...passed 00:11:56.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.162 Test: blockdev write read max offset ...passed 00:11:56.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.162 Test: blockdev writev readv 8 blocks ...passed 00:11:56.162 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.162 Test: blockdev writev readv block ...passed 00:11:56.163 Test: blockdev writev readv size > 128k ...passed 00:11:56.163 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.163 Test: blockdev comparev and writev ...passed 00:11:56.163 Test: blockdev nvme passthru rw ...passed 00:11:56.163 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.163 Test: blockdev nvme admin passthru ...passed 00:11:56.163 Test: blockdev copy ...passed 
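The same 23-step suite (from "blockdev write read block" through "blockdev copy") repeats for every registered bdev, which is what the run summary further below tallies: 16 suites × 23 tests = 368 tests, all passing.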
00:11:56.163 Suite: bdevio tests on: Malloc2p5 00:11:56.163 Test: blockdev write read block ...passed 00:11:56.163 Test: blockdev write zeroes read block ...passed 00:11:56.163 Test: blockdev write zeroes read no split ...passed 00:11:56.163 Test: blockdev write zeroes read split ...passed 00:11:56.163 Test: blockdev write zeroes read split partial ...passed 00:11:56.163 Test: blockdev reset ...passed 00:11:56.163 Test: blockdev write read 8 blocks ...passed 00:11:56.163 Test: blockdev write read size > 128k ...passed 00:11:56.163 Test: blockdev write read invalid size ...passed 00:11:56.163 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.163 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.163 Test: blockdev write read max offset ...passed 00:11:56.163 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.163 Test: blockdev writev readv 8 blocks ...passed 00:11:56.163 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.163 Test: blockdev writev readv block ...passed 00:11:56.163 Test: blockdev writev readv size > 128k ...passed 00:11:56.163 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.163 Test: blockdev comparev and writev ...passed 00:11:56.163 Test: blockdev nvme passthru rw ...passed 00:11:56.163 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.163 Test: blockdev nvme admin passthru ...passed 00:11:56.163 Test: blockdev copy ...passed 00:11:56.163 Suite: bdevio tests on: Malloc2p4 00:11:56.163 Test: blockdev write read block ...passed 00:11:56.163 Test: blockdev write zeroes read block ...passed 00:11:56.163 Test: blockdev write zeroes read no split ...passed 00:11:56.163 Test: blockdev write zeroes read split ...passed 00:11:56.422 Test: blockdev write zeroes read split partial ...passed 00:11:56.422 Test: blockdev reset ...passed 00:11:56.422 Test: blockdev write read 8 blocks ...passed 00:11:56.422 Test: blockdev write read size > 128k ...passed 00:11:56.422 Test: blockdev write read invalid size ...passed 00:11:56.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.422 Test: blockdev write read max offset ...passed 00:11:56.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.422 Test: blockdev writev readv 8 blocks ...passed 00:11:56.422 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.422 Test: blockdev writev readv block ...passed 00:11:56.422 Test: blockdev writev readv size > 128k ...passed 00:11:56.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.422 Test: blockdev comparev and writev ...passed 00:11:56.422 Test: blockdev nvme passthru rw ...passed 00:11:56.422 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.422 Test: blockdev nvme admin passthru ...passed 00:11:56.422 Test: blockdev copy ...passed 00:11:56.422 Suite: bdevio tests on: Malloc2p3 00:11:56.422 Test: blockdev write read block ...passed 00:11:56.422 Test: blockdev write zeroes read block ...passed 00:11:56.422 Test: blockdev write zeroes read no split ...passed 00:11:56.422 Test: blockdev write zeroes read split ...passed 00:11:56.422 Test: blockdev write zeroes read split partial ...passed 00:11:56.422 Test: blockdev reset ...passed 00:11:56.422 Test: blockdev write read 8 blocks ...passed 00:11:56.422 Test: blockdev write read size > 128k ...passed 00:11:56.422 Test: 
blockdev write read invalid size ...passed 00:11:56.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.422 Test: blockdev write read max offset ...passed 00:11:56.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.422 Test: blockdev writev readv 8 blocks ...passed 00:11:56.422 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.422 Test: blockdev writev readv block ...passed 00:11:56.422 Test: blockdev writev readv size > 128k ...passed 00:11:56.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.422 Test: blockdev comparev and writev ...passed 00:11:56.422 Test: blockdev nvme passthru rw ...passed 00:11:56.422 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.422 Test: blockdev nvme admin passthru ...passed 00:11:56.422 Test: blockdev copy ...passed 00:11:56.422 Suite: bdevio tests on: Malloc2p2 00:11:56.422 Test: blockdev write read block ...passed 00:11:56.422 Test: blockdev write zeroes read block ...passed 00:11:56.422 Test: blockdev write zeroes read no split ...passed 00:11:56.422 Test: blockdev write zeroes read split ...passed 00:11:56.422 Test: blockdev write zeroes read split partial ...passed 00:11:56.422 Test: blockdev reset ...passed 00:11:56.422 Test: blockdev write read 8 blocks ...passed 00:11:56.422 Test: blockdev write read size > 128k ...passed 00:11:56.422 Test: blockdev write read invalid size ...passed 00:11:56.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.422 Test: blockdev write read max offset ...passed 00:11:56.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.422 Test: blockdev writev readv 8 blocks ...passed 00:11:56.422 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.422 Test: blockdev writev readv block ...passed 00:11:56.422 Test: blockdev writev readv size > 128k ...passed 00:11:56.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.422 Test: blockdev comparev and writev ...passed 00:11:56.422 Test: blockdev nvme passthru rw ...passed 00:11:56.422 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.422 Test: blockdev nvme admin passthru ...passed 00:11:56.422 Test: blockdev copy ...passed 00:11:56.422 Suite: bdevio tests on: Malloc2p1 00:11:56.422 Test: blockdev write read block ...passed 00:11:56.422 Test: blockdev write zeroes read block ...passed 00:11:56.422 Test: blockdev write zeroes read no split ...passed 00:11:56.422 Test: blockdev write zeroes read split ...passed 00:11:56.422 Test: blockdev write zeroes read split partial ...passed 00:11:56.422 Test: blockdev reset ...passed 00:11:56.422 Test: blockdev write read 8 blocks ...passed 00:11:56.422 Test: blockdev write read size > 128k ...passed 00:11:56.422 Test: blockdev write read invalid size ...passed 00:11:56.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.422 Test: blockdev write read max offset ...passed 00:11:56.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.422 Test: blockdev writev readv 8 blocks ...passed 00:11:56.422 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.422 Test: blockdev writev readv block ...passed 
00:11:56.422 Test: blockdev writev readv size > 128k ...passed 00:11:56.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.422 Test: blockdev comparev and writev ...passed 00:11:56.422 Test: blockdev nvme passthru rw ...passed 00:11:56.422 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.422 Test: blockdev nvme admin passthru ...passed 00:11:56.422 Test: blockdev copy ...passed 00:11:56.422 Suite: bdevio tests on: Malloc2p0 00:11:56.422 Test: blockdev write read block ...passed 00:11:56.422 Test: blockdev write zeroes read block ...passed 00:11:56.422 Test: blockdev write zeroes read no split ...passed 00:11:56.422 Test: blockdev write zeroes read split ...passed 00:11:56.422 Test: blockdev write zeroes read split partial ...passed 00:11:56.422 Test: blockdev reset ...passed 00:11:56.422 Test: blockdev write read 8 blocks ...passed 00:11:56.422 Test: blockdev write read size > 128k ...passed 00:11:56.422 Test: blockdev write read invalid size ...passed 00:11:56.422 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.422 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.422 Test: blockdev write read max offset ...passed 00:11:56.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.422 Test: blockdev writev readv 8 blocks ...passed 00:11:56.422 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.422 Test: blockdev writev readv block ...passed 00:11:56.422 Test: blockdev writev readv size > 128k ...passed 00:11:56.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.422 Test: blockdev comparev and writev ...passed 00:11:56.422 Test: blockdev nvme passthru rw ...passed 00:11:56.422 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.422 Test: blockdev nvme admin passthru ...passed 00:11:56.422 Test: blockdev copy ...passed 00:11:56.422 Suite: bdevio tests on: Malloc1p1 00:11:56.422 Test: blockdev write read block ...passed 00:11:56.422 Test: blockdev write zeroes read block ...passed 00:11:56.422 Test: blockdev write zeroes read no split ...passed 00:11:56.422 Test: blockdev write zeroes read split ...passed 00:11:56.681 Test: blockdev write zeroes read split partial ...passed 00:11:56.681 Test: blockdev reset ...passed 00:11:56.681 Test: blockdev write read 8 blocks ...passed 00:11:56.681 Test: blockdev write read size > 128k ...passed 00:11:56.681 Test: blockdev write read invalid size ...passed 00:11:56.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.681 Test: blockdev write read max offset ...passed 00:11:56.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.681 Test: blockdev writev readv 8 blocks ...passed 00:11:56.681 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.681 Test: blockdev writev readv block ...passed 00:11:56.681 Test: blockdev writev readv size > 128k ...passed 00:11:56.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.681 Test: blockdev comparev and writev ...passed 00:11:56.681 Test: blockdev nvme passthru rw ...passed 00:11:56.681 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.681 Test: blockdev nvme admin passthru ...passed 00:11:56.681 Test: blockdev copy ...passed 00:11:56.681 Suite: bdevio tests on: Malloc1p0 00:11:56.681 Test: blockdev write read block ...passed 00:11:56.681 Test: blockdev 
write zeroes read block ...passed 00:11:56.681 Test: blockdev write zeroes read no split ...passed 00:11:56.681 Test: blockdev write zeroes read split ...passed 00:11:56.681 Test: blockdev write zeroes read split partial ...passed 00:11:56.681 Test: blockdev reset ...passed 00:11:56.681 Test: blockdev write read 8 blocks ...passed 00:11:56.681 Test: blockdev write read size > 128k ...passed 00:11:56.681 Test: blockdev write read invalid size ...passed 00:11:56.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.681 Test: blockdev write read max offset ...passed 00:11:56.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.681 Test: blockdev writev readv 8 blocks ...passed 00:11:56.681 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.681 Test: blockdev writev readv block ...passed 00:11:56.681 Test: blockdev writev readv size > 128k ...passed 00:11:56.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.681 Test: blockdev comparev and writev ...passed 00:11:56.681 Test: blockdev nvme passthru rw ...passed 00:11:56.681 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.681 Test: blockdev nvme admin passthru ...passed 00:11:56.681 Test: blockdev copy ...passed 00:11:56.681 Suite: bdevio tests on: Malloc0 00:11:56.681 Test: blockdev write read block ...passed 00:11:56.681 Test: blockdev write zeroes read block ...passed 00:11:56.681 Test: blockdev write zeroes read no split ...passed 00:11:56.681 Test: blockdev write zeroes read split ...passed 00:11:56.681 Test: blockdev write zeroes read split partial ...passed 00:11:56.681 Test: blockdev reset ...passed 00:11:56.681 Test: blockdev write read 8 blocks ...passed 00:11:56.681 Test: blockdev write read size > 128k ...passed 00:11:56.681 Test: blockdev write read invalid size ...passed 00:11:56.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.681 Test: blockdev write read max offset ...passed 00:11:56.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.681 Test: blockdev writev readv 8 blocks ...passed 00:11:56.681 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.681 Test: blockdev writev readv block ...passed 00:11:56.681 Test: blockdev writev readv size > 128k ...passed 00:11:56.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.681 Test: blockdev comparev and writev ...passed 00:11:56.681 Test: blockdev nvme passthru rw ...passed 00:11:56.681 Test: blockdev nvme passthru vendor specific ...passed 00:11:56.681 Test: blockdev nvme admin passthru ...passed 00:11:56.681 Test: blockdev copy ...passed 00:11:56.681 00:11:56.681 Run Summary: Type Total Ran Passed Failed Inactive 00:11:56.681 suites 16 16 n/a 0 0 00:11:56.681 tests 368 368 368 0 0 00:11:56.681 asserts 2224 2224 2224 0 n/a 00:11:56.681 00:11:56.681 Elapsed time = 2.428 seconds 00:11:56.681 0 00:11:56.681 05:31:00 -- bdev/blockdev.sh@293 -- # killprocess 119295 00:11:56.681 05:31:00 -- common/autotest_common.sh@926 -- # '[' -z 119295 ']' 00:11:56.681 05:31:00 -- common/autotest_common.sh@930 -- # kill -0 119295 00:11:56.681 05:31:00 -- common/autotest_common.sh@931 -- # uname 00:11:56.681 05:31:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:56.681 05:31:00 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119295 00:11:56.681 killing process with pid 119295 00:11:56.681 05:31:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:56.681 05:31:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:56.681 05:31:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119295' 00:11:56.681 05:31:00 -- common/autotest_common.sh@945 -- # kill 119295 00:11:56.681 05:31:00 -- common/autotest_common.sh@950 -- # wait 119295 00:11:58.587 05:31:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:58.587 00:11:58.587 real 0m4.555s 00:11:58.587 user 0m11.629s 00:11:58.587 sys 0m0.597s 00:11:58.587 05:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.587 05:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:58.587 ************************************ 00:11:58.587 END TEST bdev_bounds 00:11:58.587 ************************************ 00:11:58.587 05:31:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:58.587 05:31:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:58.587 05:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.587 05:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:58.587 ************************************ 00:11:58.587 START TEST bdev_nbd 00:11:58.587 ************************************ 00:11:58.587 05:31:02 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:58.587 05:31:02 -- bdev/blockdev.sh@298 -- # uname -s 00:11:58.587 05:31:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:58.587 05:31:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.587 05:31:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:58.587 05:31:02 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:58.587 05:31:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:58.587 05:31:02 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:58.587 05:31:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:58.588 05:31:02 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:58.588 05:31:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:58.588 05:31:02 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:58.588 05:31:02 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:58.588 05:31:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:58.588 05:31:02 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:58.588 05:31:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:58.588 05:31:02 -- bdev/blockdev.sh@316 -- # nbd_pid=119508 00:11:58.588 05:31:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:58.588 05:31:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:58.588 05:31:02 -- bdev/blockdev.sh@318 -- # waitforlisten 119508 /var/tmp/spdk-nbd.sock 00:11:58.588 05:31:02 -- common/autotest_common.sh@819 -- # '[' -z 119508 ']' 00:11:58.588 05:31:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:58.588 05:31:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:58.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:58.588 05:31:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:58.588 05:31:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:58.588 05:31:02 -- common/autotest_common.sh@10 -- # set +x 00:11:58.588 [2024-10-07 05:31:02.533039] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:58.588 [2024-10-07 05:31:02.533253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.846 [2024-10-07 05:31:02.689017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.105 [2024-10-07 05:31:02.909362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.363 [2024-10-07 05:31:03.247656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:59.363 [2024-10-07 05:31:03.247761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:59.363 [2024-10-07 05:31:03.255603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:59.363 [2024-10-07 05:31:03.255703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:59.363 [2024-10-07 05:31:03.263624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:59.363 [2024-10-07 05:31:03.263687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:59.363 [2024-10-07 05:31:03.263722] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:59.622 [2024-10-07 05:31:03.450845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:59.622 [2024-10-07 05:31:03.450963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.623 [2024-10-07 05:31:03.451045] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:59.623 [2024-10-07 05:31:03.451078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.623 [2024-10-07 05:31:03.453736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.623 [2024-10-07 05:31:03.453793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:00.557 05:31:04 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:12:00.557 05:31:04 -- common/autotest_common.sh@852 -- # return 0 00:12:00.557 05:31:04 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@24 -- # local i 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:00.557 05:31:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:00.842 05:31:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:00.842 05:31:04 -- common/autotest_common.sh@857 -- # local i 00:12:00.842 05:31:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:00.842 05:31:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:00.842 05:31:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:00.842 05:31:04 -- common/autotest_common.sh@861 -- # break 00:12:00.842 05:31:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:00.842 05:31:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:00.842 05:31:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.842 1+0 records in 00:12:00.842 1+0 records out 00:12:00.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292374 s, 14.0 MB/s 00:12:00.842 05:31:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.842 05:31:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:00.842 05:31:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.842 05:31:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:00.842 05:31:04 -- common/autotest_common.sh@877 -- # return 0 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:00.842 05:31:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:01.101 05:31:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:01.101 05:31:04 -- common/autotest_common.sh@857 -- # local i 00:12:01.101 05:31:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:01.101 05:31:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:01.101 05:31:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:01.101 05:31:04 -- common/autotest_common.sh@861 -- # break 00:12:01.101 05:31:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:01.101 05:31:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:01.101 05:31:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.101 1+0 records in 00:12:01.101 1+0 records out 00:12:01.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418221 s, 9.8 MB/s 00:12:01.101 05:31:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.101 05:31:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:01.101 05:31:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.101 05:31:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:01.101 05:31:04 -- common/autotest_common.sh@877 -- # return 0 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:01.101 05:31:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:01.359 05:31:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:01.359 05:31:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:01.359 05:31:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:01.359 05:31:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:01.359 05:31:05 -- common/autotest_common.sh@857 -- # local i 00:12:01.359 05:31:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:01.359 05:31:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:01.359 05:31:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:01.359 05:31:05 -- common/autotest_common.sh@861 -- # break 00:12:01.359 05:31:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:01.359 05:31:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:01.359 05:31:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.359 1+0 records in 00:12:01.359 1+0 records out 00:12:01.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138815 s, 3.0 MB/s 00:12:01.359 05:31:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.359 05:31:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:01.359 05:31:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.359 05:31:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:01.359 05:31:05 -- common/autotest_common.sh@877 -- # return 0 00:12:01.359 05:31:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.359 05:31:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:01.359 
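Each of the 16 bdevs is exported over NBD with the sequence that keeps repeating in the trace above. Reassembled from those xtrace lines (a rough sketch, not the verbatim nbd_common.sh/autotest_common.sh source; the retry delay in particular is an assumption), one iteration looks approximately like this:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

  # Ask the bdev_svc app listening on $sock to export the bdev; SPDK picks the next
  # free /dev/nbdX and prints it (e.g. Malloc0 -> /dev/nbd0 in the trace above).
  nbd_device=$("$rpc" -s "$sock" nbd_start_disk Malloc0)
  nbd_name=$(basename "$nbd_device")

  # waitfornbd: poll until the kernel has registered the new device.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1   # assumed delay; the 1..20 retry bounds match the trace
  done

  # Prove the export works with one 4 KiB direct-I/O read, as the dd lines above do.
  dd if="$nbd_device" of="$testfile" bs=4096 count=1 iflag=direct
  size=$(stat -c %s "$testfile")
  rm -f "$testfile"
  [ "$size" != 0 ]   # the trace shows size=4096 for every device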
05:31:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:01.618 05:31:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:01.618 05:31:05 -- common/autotest_common.sh@857 -- # local i 00:12:01.618 05:31:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:01.618 05:31:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:01.618 05:31:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:01.618 05:31:05 -- common/autotest_common.sh@861 -- # break 00:12:01.618 05:31:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:01.618 05:31:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:01.618 05:31:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.618 1+0 records in 00:12:01.618 1+0 records out 00:12:01.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450727 s, 9.1 MB/s 00:12:01.618 05:31:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.618 05:31:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:01.618 05:31:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.618 05:31:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:01.618 05:31:05 -- common/autotest_common.sh@877 -- # return 0 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:01.618 05:31:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:01.876 05:31:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:01.876 05:31:05 -- common/autotest_common.sh@857 -- # local i 00:12:01.876 05:31:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:01.876 05:31:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:01.876 05:31:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:01.876 05:31:05 -- common/autotest_common.sh@861 -- # break 00:12:01.876 05:31:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:01.876 05:31:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:01.876 05:31:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.876 1+0 records in 00:12:01.876 1+0 records out 00:12:01.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377964 s, 10.8 MB/s 00:12:01.876 05:31:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.876 05:31:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:01.876 05:31:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.876 05:31:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:01.876 05:31:05 -- common/autotest_common.sh@877 -- # return 0 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:01.876 05:31:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:02.444 05:31:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:02.444 05:31:06 -- common/autotest_common.sh@857 -- # local i 00:12:02.444 05:31:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:02.444 05:31:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:02.444 05:31:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:02.444 05:31:06 -- common/autotest_common.sh@861 -- # break 00:12:02.444 05:31:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:02.444 05:31:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:02.444 05:31:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.444 1+0 records in 00:12:02.444 1+0 records out 00:12:02.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688257 s, 6.0 MB/s 00:12:02.444 05:31:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.444 05:31:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:02.444 05:31:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.444 05:31:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:02.444 05:31:06 -- common/autotest_common.sh@877 -- # return 0 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:02.444 05:31:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:02.703 05:31:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:02.703 05:31:06 -- common/autotest_common.sh@857 -- # local i 00:12:02.703 05:31:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:02.703 05:31:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:02.703 05:31:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:02.703 05:31:06 -- common/autotest_common.sh@861 -- # break 00:12:02.703 05:31:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:02.703 05:31:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:02.703 05:31:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.703 1+0 records in 00:12:02.703 1+0 records out 00:12:02.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743655 s, 5.5 MB/s 00:12:02.703 05:31:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.703 05:31:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:02.703 05:31:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.703 05:31:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:02.703 05:31:06 -- 
common/autotest_common.sh@877 -- # return 0 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:02.703 05:31:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:02.962 05:31:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:02.962 05:31:06 -- common/autotest_common.sh@857 -- # local i 00:12:02.962 05:31:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:02.962 05:31:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:02.962 05:31:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:02.962 05:31:06 -- common/autotest_common.sh@861 -- # break 00:12:02.962 05:31:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:02.962 05:31:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:02.962 05:31:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:02.962 1+0 records in 00:12:02.962 1+0 records out 00:12:02.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612811 s, 6.7 MB/s 00:12:02.962 05:31:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.962 05:31:06 -- common/autotest_common.sh@874 -- # size=4096 00:12:02.962 05:31:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:02.962 05:31:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:02.962 05:31:06 -- common/autotest_common.sh@877 -- # return 0 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:02.962 05:31:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:03.221 05:31:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:03.221 05:31:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:03.221 05:31:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:03.221 05:31:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:03.221 05:31:06 -- common/autotest_common.sh@857 -- # local i 00:12:03.221 05:31:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.221 05:31:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.221 05:31:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:03.221 05:31:06 -- common/autotest_common.sh@861 -- # break 00:12:03.221 05:31:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.221 05:31:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.221 05:31:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.221 1+0 records in 00:12:03.221 1+0 records out 00:12:03.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792671 s, 5.2 MB/s 00:12:03.221 05:31:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.221 05:31:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.221 05:31:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.221 
05:31:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.221 05:31:07 -- common/autotest_common.sh@877 -- # return 0 00:12:03.221 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.221 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:03.221 05:31:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:03.479 05:31:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:03.479 05:31:07 -- common/autotest_common.sh@857 -- # local i 00:12:03.479 05:31:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.479 05:31:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.479 05:31:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:03.479 05:31:07 -- common/autotest_common.sh@861 -- # break 00:12:03.479 05:31:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.479 05:31:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.479 05:31:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.479 1+0 records in 00:12:03.479 1+0 records out 00:12:03.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607796 s, 6.7 MB/s 00:12:03.479 05:31:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.479 05:31:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.479 05:31:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.479 05:31:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.479 05:31:07 -- common/autotest_common.sh@877 -- # return 0 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:03.479 05:31:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:03.738 05:31:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:03.738 05:31:07 -- common/autotest_common.sh@857 -- # local i 00:12:03.738 05:31:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.738 05:31:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.738 05:31:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:03.738 05:31:07 -- common/autotest_common.sh@861 -- # break 00:12:03.738 05:31:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.738 05:31:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.738 05:31:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.738 1+0 records in 00:12:03.738 1+0 records out 00:12:03.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775314 s, 5.3 MB/s 00:12:03.738 05:31:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.738 05:31:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.738 05:31:07 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.738 05:31:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.738 05:31:07 -- common/autotest_common.sh@877 -- # return 0 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:03.738 05:31:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:03.997 05:31:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:03.997 05:31:07 -- common/autotest_common.sh@857 -- # local i 00:12:03.997 05:31:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.997 05:31:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.997 05:31:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:03.997 05:31:07 -- common/autotest_common.sh@861 -- # break 00:12:03.997 05:31:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.997 05:31:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.997 05:31:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.997 1+0 records in 00:12:03.997 1+0 records out 00:12:03.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700772 s, 5.8 MB/s 00:12:03.997 05:31:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.997 05:31:07 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.997 05:31:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.997 05:31:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.997 05:31:07 -- common/autotest_common.sh@877 -- # return 0 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:03.997 05:31:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:04.255 05:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:04.255 05:31:08 -- common/autotest_common.sh@857 -- # local i 00:12:04.255 05:31:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.255 05:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.255 05:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:04.255 05:31:08 -- common/autotest_common.sh@861 -- # break 00:12:04.255 05:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.255 05:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.255 05:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.255 1+0 records in 00:12:04.255 1+0 records out 00:12:04.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044874 s, 9.1 MB/s 00:12:04.255 05:31:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:04.255 05:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.255 05:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.255 05:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.255 05:31:08 -- common/autotest_common.sh@877 -- # return 0 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:04.255 05:31:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:04.829 05:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:04.829 05:31:08 -- common/autotest_common.sh@857 -- # local i 00:12:04.829 05:31:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:04.829 05:31:08 -- common/autotest_common.sh@861 -- # break 00:12:04.829 05:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.829 1+0 records in 00:12:04.829 1+0 records out 00:12:04.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769217 s, 5.3 MB/s 00:12:04.829 05:31:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.829 05:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.829 05:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.829 05:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.829 05:31:08 -- common/autotest_common.sh@877 -- # return 0 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:04.829 05:31:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:04.829 05:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:04.829 05:31:08 -- common/autotest_common.sh@857 -- # local i 00:12:04.829 05:31:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:04.829 05:31:08 -- common/autotest_common.sh@861 -- # break 00:12:04.829 05:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.829 05:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.830 05:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.830 1+0 records in 00:12:04.830 1+0 records out 00:12:04.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100583 s, 4.1 MB/s 00:12:04.830 05:31:08 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.830 05:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.830 05:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.830 05:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.830 05:31:08 -- common/autotest_common.sh@877 -- # return 0 00:12:04.830 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:04.830 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:04.830 05:31:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:05.088 05:31:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:05.088 05:31:08 -- common/autotest_common.sh@857 -- # local i 00:12:05.088 05:31:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.088 05:31:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.088 05:31:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:05.088 05:31:08 -- common/autotest_common.sh@861 -- # break 00:12:05.088 05:31:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.088 05:31:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.088 05:31:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.088 1+0 records in 00:12:05.088 1+0 records out 00:12:05.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127117 s, 3.2 MB/s 00:12:05.088 05:31:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.088 05:31:08 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.088 05:31:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.088 05:31:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.088 05:31:08 -- common/autotest_common.sh@877 -- # return 0 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:05.088 05:31:08 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd0", 00:12:05.347 "bdev_name": "Malloc0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd1", 00:12:05.347 "bdev_name": "Malloc1p0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd2", 00:12:05.347 "bdev_name": "Malloc1p1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd3", 00:12:05.347 "bdev_name": "Malloc2p0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd4", 00:12:05.347 "bdev_name": "Malloc2p1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd5", 00:12:05.347 "bdev_name": "Malloc2p2" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd6", 00:12:05.347 "bdev_name": "Malloc2p3" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd7", 00:12:05.347 "bdev_name": "Malloc2p4" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd8", 00:12:05.347 "bdev_name": "Malloc2p5" 
00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd9", 00:12:05.347 "bdev_name": "Malloc2p6" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd10", 00:12:05.347 "bdev_name": "Malloc2p7" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd11", 00:12:05.347 "bdev_name": "TestPT" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd12", 00:12:05.347 "bdev_name": "raid0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd13", 00:12:05.347 "bdev_name": "concat0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd14", 00:12:05.347 "bdev_name": "raid1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd15", 00:12:05.347 "bdev_name": "AIO0" 00:12:05.347 } 00:12:05.347 ]' 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd0", 00:12:05.347 "bdev_name": "Malloc0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd1", 00:12:05.347 "bdev_name": "Malloc1p0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd2", 00:12:05.347 "bdev_name": "Malloc1p1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd3", 00:12:05.347 "bdev_name": "Malloc2p0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd4", 00:12:05.347 "bdev_name": "Malloc2p1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd5", 00:12:05.347 "bdev_name": "Malloc2p2" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd6", 00:12:05.347 "bdev_name": "Malloc2p3" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd7", 00:12:05.347 "bdev_name": "Malloc2p4" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd8", 00:12:05.347 "bdev_name": "Malloc2p5" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd9", 00:12:05.347 "bdev_name": "Malloc2p6" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd10", 00:12:05.347 "bdev_name": "Malloc2p7" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd11", 00:12:05.347 "bdev_name": "TestPT" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd12", 00:12:05.347 "bdev_name": "raid0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd13", 00:12:05.347 "bdev_name": "concat0" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd14", 00:12:05.347 "bdev_name": "raid1" 00:12:05.347 }, 00:12:05.347 { 00:12:05.347 "nbd_device": "/dev/nbd15", 00:12:05.347 "bdev_name": "AIO0" 00:12:05.347 } 00:12:05.347 ]' 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@51 -- # local i 00:12:05.347 05:31:09 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.347 05:31:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@41 -- # break 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.606 05:31:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@41 -- # break 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.864 05:31:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@41 -- # break 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@41 -- # break 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.430 05:31:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:06.687 05:31:10 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@41 -- # break 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.687 05:31:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@41 -- # break 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.944 05:31:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@41 -- # break 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.203 05:31:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@41 -- # break 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.461 05:31:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:07.718 05:31:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:07.718 05:31:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:07.718 05:31:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:07.718 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@41 -- # break 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.719 05:31:11 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@41 -- # break 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.977 05:31:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.234 05:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.493 05:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.751 05:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
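[The teardown loop running through this part of the log stops each exported device over RPC and then polls for it to disappear. A minimal sketch of the waitfornbd_exit helper behind the bdev/nbd_common.sh@35-@45 trace entries, reconstructed from the trace alone; the 0.1 s back-off interval is an assumption (it never fires here because grep apparently already fails on the first pass after nbd_stop_disk), so this is not a verbatim copy of the SPDK source.]

    waitfornbd_exit() {
        local nbd_name=$1
        local i

        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                # still listed: give the kernel a moment to finish tearing the device down
                sleep 0.1   # interval assumed, not visible in this passing run
            else
                break       # gone from /proc/partitions, as on the first pass above
            fi
        done
        return 0
    }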
00:12:09.009 05:31:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@41 -- # break 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.009 05:31:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@41 -- # break 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.267 05:31:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@41 -- # break 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.525 05:31:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@65 -- # true 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@65 -- # count=0 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@122 -- # count=0 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@127 -- # return 0 00:12:09.783 05:31:13 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@12 -- # local i 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:09.783 05:31:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:10.041 /dev/nbd0 00:12:10.041 05:31:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.041 05:31:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.041 05:31:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:10.041 05:31:13 -- common/autotest_common.sh@857 -- # local i 00:12:10.041 05:31:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:10.041 05:31:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:10.041 05:31:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:10.041 05:31:13 -- common/autotest_common.sh@861 -- # break 00:12:10.041 05:31:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:10.041 05:31:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:10.041 05:31:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.041 1+0 records in 00:12:10.041 1+0 records out 00:12:10.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204768 s, 20.0 MB/s 00:12:10.041 05:31:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.041 05:31:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:10.041 05:31:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.041 05:31:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:10.041 05:31:13 -- common/autotest_common.sh@877 -- # return 0 00:12:10.041 05:31:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.041 05:31:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:10.041 05:31:13 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:10.300 /dev/nbd1 00:12:10.300 05:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:10.300 05:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:10.300 05:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:10.300 05:31:14 -- common/autotest_common.sh@857 -- # local i 00:12:10.300 05:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:10.300 05:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:10.300 05:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:10.300 05:31:14 -- common/autotest_common.sh@861 -- # break 00:12:10.300 05:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:10.300 05:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:10.300 05:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.300 1+0 records in 00:12:10.300 1+0 records out 00:12:10.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702606 s, 5.8 MB/s 00:12:10.300 05:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.300 05:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:10.300 05:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.300 05:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:10.300 05:31:14 -- common/autotest_common.sh@877 -- # return 0 00:12:10.300 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.300 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:10.300 05:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:10.558 /dev/nbd10 00:12:10.558 05:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:10.558 05:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:10.558 05:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:10.558 05:31:14 -- common/autotest_common.sh@857 -- # local i 00:12:10.558 05:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:10.558 05:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:10.558 05:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:10.558 05:31:14 -- common/autotest_common.sh@861 -- # break 00:12:10.558 05:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:10.558 05:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:10.558 05:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.558 1+0 records in 00:12:10.558 1+0 records out 00:12:10.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228697 s, 17.9 MB/s 00:12:10.558 05:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.558 05:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:10.558 05:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.817 05:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:10.817 05:31:14 -- common/autotest_common.sh@877 -- # return 0 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
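[Every nbd_start_disk call above is followed by the same readiness check. A minimal sketch of waitfornbd as it can be read off the common/autotest_common.sh@856-@877 trace entries in this log: first poll /proc/partitions for the new node, then prove it serves I/O by reading one 4 KiB block with O_DIRECT and checking the copy is non-empty. The sleep interval and the final failure return are assumptions (every device in this run passes immediately); the temp-file path is the one this run used.]

    waitfornbd() {
        local nbd_name=$1
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest   # path taken from this run
        local i size

        # 1) wait for the kernel to list the new device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1   # interval assumed
        done

        # 2) prove the device actually serves reads: one 4 KiB block, direct I/O
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                if [ "$size" != "0" ]; then
                    return 0
                fi
            fi
            sleep 0.1   # assumed
        done
        return 1        # assumed failure path; never reached in this log
    }

[The iflag=direct read is the important part: it bypasses the page cache, so a successful copy means the NBD connection to the SPDK target is really moving data, not merely that the device node exists.]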
00:12:10.817 05:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:10.817 /dev/nbd11 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:10.817 05:31:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:10.817 05:31:14 -- common/autotest_common.sh@857 -- # local i 00:12:10.817 05:31:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:10.817 05:31:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:10.817 05:31:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:10.817 05:31:14 -- common/autotest_common.sh@861 -- # break 00:12:10.817 05:31:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:10.817 05:31:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:10.817 05:31:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.817 1+0 records in 00:12:10.817 1+0 records out 00:12:10.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733278 s, 5.6 MB/s 00:12:10.817 05:31:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.817 05:31:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:10.817 05:31:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.817 05:31:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:10.817 05:31:14 -- common/autotest_common.sh@877 -- # return 0 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:10.817 05:31:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:11.076 /dev/nbd12 00:12:11.076 05:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:11.076 05:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:11.076 05:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:11.076 05:31:15 -- common/autotest_common.sh@857 -- # local i 00:12:11.076 05:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.076 05:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.076 05:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:11.076 05:31:15 -- common/autotest_common.sh@861 -- # break 00:12:11.076 05:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.076 05:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.076 05:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.076 1+0 records in 00:12:11.076 1+0 records out 00:12:11.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529369 s, 7.7 MB/s 00:12:11.076 05:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.076 05:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.076 05:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.076 05:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.076 05:31:15 -- common/autotest_common.sh@877 -- # return 0 00:12:11.076 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.076 05:31:15 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:11.076 05:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:11.642 /dev/nbd13 00:12:11.642 05:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:11.642 05:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:11.642 05:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:11.642 05:31:15 -- common/autotest_common.sh@857 -- # local i 00:12:11.642 05:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.642 05:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.642 05:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:11.642 05:31:15 -- common/autotest_common.sh@861 -- # break 00:12:11.642 05:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.643 1+0 records in 00:12:11.643 1+0 records out 00:12:11.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006034 s, 6.8 MB/s 00:12:11.643 05:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.643 05:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.643 05:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.643 05:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.643 05:31:15 -- common/autotest_common.sh@877 -- # return 0 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:11.643 /dev/nbd14 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:11.643 05:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:11.643 05:31:15 -- common/autotest_common.sh@857 -- # local i 00:12:11.643 05:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:11.643 05:31:15 -- common/autotest_common.sh@861 -- # break 00:12:11.643 05:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:11.643 05:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.643 1+0 records in 00:12:11.643 1+0 records out 00:12:11.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741081 s, 5.5 MB/s 00:12:11.643 05:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.643 05:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:12:11.643 05:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.643 05:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:11.643 05:31:15 -- common/autotest_common.sh@877 -- # return 0 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
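[The sixteen start/wait blocks in this stretch of the log are iterations of the nbd_start_disks loop in bdev/nbd_common.sh (@9-@17), which pairs each bdev name with an explicitly requested /dev/nbd* node and then waits for it with waitfornbd. A sketch of that loop as reconstructed from the trace; argument handling is simplified and the hard-coded rpc.py path is simply the one this run printed, where the real script presumably resolves it from the repo root.]

    nbd_start_disks() {
        local rpc_server=$1      # /var/tmp/spdk-nbd.sock in this run
        local bdev_list=($2)     # "Malloc0 Malloc1p0 ... AIO0"
        local nbd_list=($3)      # "/dev/nbd0 /dev/nbd1 ... /dev/nbd9"
        local i

        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # ask the running SPDK app to export bdev_list[i] on the requested nbd node
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
                nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            # block until the kernel device is present and readable
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

[The shuffled pairing visible above (Malloc1p1 on /dev/nbd10, Malloc2p5 on /dev/nbd2, TestPT on /dev/nbd5, and so on) apparently checks that the device assignment follows the explicit request rather than enumeration order.]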
00:12:11.643 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:11.643 05:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:11.901 /dev/nbd15 00:12:12.159 05:31:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:12.159 05:31:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:12.159 05:31:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:12.159 05:31:15 -- common/autotest_common.sh@857 -- # local i 00:12:12.159 05:31:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.159 05:31:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.159 05:31:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:12.159 05:31:15 -- common/autotest_common.sh@861 -- # break 00:12:12.159 05:31:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.159 05:31:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.159 05:31:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.159 1+0 records in 00:12:12.159 1+0 records out 00:12:12.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716446 s, 5.7 MB/s 00:12:12.159 05:31:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.159 05:31:15 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.159 05:31:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.159 05:31:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.159 05:31:15 -- common/autotest_common.sh@877 -- # return 0 00:12:12.159 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.159 05:31:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:12.159 05:31:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:12.159 /dev/nbd2 00:12:12.417 05:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:12.417 05:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:12.417 05:31:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:12.417 05:31:16 -- common/autotest_common.sh@857 -- # local i 00:12:12.417 05:31:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.417 05:31:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.417 05:31:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:12.417 05:31:16 -- common/autotest_common.sh@861 -- # break 00:12:12.417 05:31:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.417 05:31:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.417 05:31:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.417 1+0 records in 00:12:12.417 1+0 records out 00:12:12.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736541 s, 5.6 MB/s 00:12:12.418 05:31:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.418 05:31:16 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.418 05:31:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.418 05:31:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.418 05:31:16 -- common/autotest_common.sh@877 -- # return 0 00:12:12.418 05:31:16 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:12:12.418 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:12.418 05:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:12.418 /dev/nbd3 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:12.676 05:31:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:12.676 05:31:16 -- common/autotest_common.sh@857 -- # local i 00:12:12.676 05:31:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:12.676 05:31:16 -- common/autotest_common.sh@861 -- # break 00:12:12.676 05:31:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.676 1+0 records in 00:12:12.676 1+0 records out 00:12:12.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406637 s, 10.1 MB/s 00:12:12.676 05:31:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.676 05:31:16 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.676 05:31:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.676 05:31:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.676 05:31:16 -- common/autotest_common.sh@877 -- # return 0 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:12.676 /dev/nbd4 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:12.676 05:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:12.676 05:31:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:12.676 05:31:16 -- common/autotest_common.sh@857 -- # local i 00:12:12.676 05:31:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:12.676 05:31:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:12.934 05:31:16 -- common/autotest_common.sh@861 -- # break 00:12:12.934 05:31:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:12.934 05:31:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:12.934 05:31:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:12.934 1+0 records in 00:12:12.934 1+0 records out 00:12:12.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640804 s, 6.4 MB/s 00:12:12.934 05:31:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.934 05:31:16 -- common/autotest_common.sh@874 -- # size=4096 00:12:12.934 05:31:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:12.934 05:31:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:12.934 05:31:16 -- common/autotest_common.sh@877 -- # return 0 00:12:12.934 05:31:16 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:12.934 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:12.934 05:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:13.192 /dev/nbd5 00:12:13.192 05:31:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:13.192 05:31:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:13.192 05:31:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:13.192 05:31:16 -- common/autotest_common.sh@857 -- # local i 00:12:13.192 05:31:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.192 05:31:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.192 05:31:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:13.192 05:31:16 -- common/autotest_common.sh@861 -- # break 00:12:13.192 05:31:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.192 05:31:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.192 05:31:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.192 1+0 records in 00:12:13.192 1+0 records out 00:12:13.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548361 s, 7.5 MB/s 00:12:13.192 05:31:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.192 05:31:16 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.192 05:31:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.192 05:31:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.192 05:31:16 -- common/autotest_common.sh@877 -- # return 0 00:12:13.192 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.192 05:31:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:13.192 05:31:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:13.450 /dev/nbd6 00:12:13.450 05:31:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:13.450 05:31:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:13.450 05:31:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:13.450 05:31:17 -- common/autotest_common.sh@857 -- # local i 00:12:13.450 05:31:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.450 05:31:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.450 05:31:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:13.450 05:31:17 -- common/autotest_common.sh@861 -- # break 00:12:13.450 05:31:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.450 05:31:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.450 05:31:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.450 1+0 records in 00:12:13.450 1+0 records out 00:12:13.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000912962 s, 4.5 MB/s 00:12:13.450 05:31:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.450 05:31:17 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.450 05:31:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.450 05:31:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.450 05:31:17 -- common/autotest_common.sh@877 -- # return 0 00:12:13.450 05:31:17 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.450 05:31:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:13.450 05:31:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:13.708 /dev/nbd7 00:12:13.708 05:31:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:13.708 05:31:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:13.708 05:31:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:13.708 05:31:17 -- common/autotest_common.sh@857 -- # local i 00:12:13.708 05:31:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.708 05:31:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.708 05:31:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:13.708 05:31:17 -- common/autotest_common.sh@861 -- # break 00:12:13.708 05:31:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.708 05:31:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.708 05:31:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.708 1+0 records in 00:12:13.708 1+0 records out 00:12:13.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663939 s, 6.2 MB/s 00:12:13.708 05:31:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.708 05:31:17 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.708 05:31:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.708 05:31:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.708 05:31:17 -- common/autotest_common.sh@877 -- # return 0 00:12:13.708 05:31:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.708 05:31:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:13.708 05:31:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:13.965 /dev/nbd8 00:12:13.965 05:31:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:13.965 05:31:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:13.965 05:31:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:13.965 05:31:17 -- common/autotest_common.sh@857 -- # local i 00:12:13.965 05:31:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:13.965 05:31:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:13.965 05:31:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:13.965 05:31:17 -- common/autotest_common.sh@861 -- # break 00:12:13.965 05:31:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:13.965 05:31:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:13.965 05:31:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.965 1+0 records in 00:12:13.965 1+0 records out 00:12:13.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941101 s, 4.4 MB/s 00:12:13.965 05:31:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.965 05:31:17 -- common/autotest_common.sh@874 -- # size=4096 00:12:13.965 05:31:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.965 05:31:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:13.965 05:31:17 -- common/autotest_common.sh@877 -- # return 0 00:12:13.965 05:31:17 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.965 05:31:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:13.965 05:31:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:14.222 /dev/nbd9 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:14.222 05:31:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:14.222 05:31:18 -- common/autotest_common.sh@857 -- # local i 00:12:14.222 05:31:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:14.222 05:31:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:14.222 05:31:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:14.222 05:31:18 -- common/autotest_common.sh@861 -- # break 00:12:14.222 05:31:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:14.222 05:31:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:14.222 05:31:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.222 1+0 records in 00:12:14.222 1+0 records out 00:12:14.222 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00139906 s, 2.9 MB/s 00:12:14.222 05:31:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.222 05:31:18 -- common/autotest_common.sh@874 -- # size=4096 00:12:14.222 05:31:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.222 05:31:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:14.222 05:31:18 -- common/autotest_common.sh@877 -- # return 0 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:14.222 05:31:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd0", 00:12:14.480 "bdev_name": "Malloc0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd1", 00:12:14.480 "bdev_name": "Malloc1p0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd10", 00:12:14.480 "bdev_name": "Malloc1p1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd11", 00:12:14.480 "bdev_name": "Malloc2p0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd12", 00:12:14.480 "bdev_name": "Malloc2p1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd13", 00:12:14.480 "bdev_name": "Malloc2p2" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd14", 00:12:14.480 "bdev_name": "Malloc2p3" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd15", 00:12:14.480 "bdev_name": "Malloc2p4" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd2", 00:12:14.480 "bdev_name": "Malloc2p5" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd3", 00:12:14.480 "bdev_name": "Malloc2p6" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd4", 00:12:14.480 "bdev_name": "Malloc2p7" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd5", 00:12:14.480 "bdev_name": 
"TestPT" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd6", 00:12:14.480 "bdev_name": "raid0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd7", 00:12:14.480 "bdev_name": "concat0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd8", 00:12:14.480 "bdev_name": "raid1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd9", 00:12:14.480 "bdev_name": "AIO0" 00:12:14.480 } 00:12:14.480 ]' 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd0", 00:12:14.480 "bdev_name": "Malloc0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd1", 00:12:14.480 "bdev_name": "Malloc1p0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd10", 00:12:14.480 "bdev_name": "Malloc1p1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd11", 00:12:14.480 "bdev_name": "Malloc2p0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd12", 00:12:14.480 "bdev_name": "Malloc2p1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd13", 00:12:14.480 "bdev_name": "Malloc2p2" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd14", 00:12:14.480 "bdev_name": "Malloc2p3" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd15", 00:12:14.480 "bdev_name": "Malloc2p4" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd2", 00:12:14.480 "bdev_name": "Malloc2p5" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd3", 00:12:14.480 "bdev_name": "Malloc2p6" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd4", 00:12:14.480 "bdev_name": "Malloc2p7" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd5", 00:12:14.480 "bdev_name": "TestPT" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd6", 00:12:14.480 "bdev_name": "raid0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd7", 00:12:14.480 "bdev_name": "concat0" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd8", 00:12:14.480 "bdev_name": "raid1" 00:12:14.480 }, 00:12:14.480 { 00:12:14.480 "nbd_device": "/dev/nbd9", 00:12:14.480 "bdev_name": "AIO0" 00:12:14.480 } 00:12:14.480 ]' 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:14.480 /dev/nbd1 00:12:14.480 /dev/nbd10 00:12:14.480 /dev/nbd11 00:12:14.480 /dev/nbd12 00:12:14.480 /dev/nbd13 00:12:14.480 /dev/nbd14 00:12:14.480 /dev/nbd15 00:12:14.480 /dev/nbd2 00:12:14.480 /dev/nbd3 00:12:14.480 /dev/nbd4 00:12:14.480 /dev/nbd5 00:12:14.480 /dev/nbd6 00:12:14.480 /dev/nbd7 00:12:14.480 /dev/nbd8 00:12:14.480 /dev/nbd9' 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:14.480 /dev/nbd1 00:12:14.480 /dev/nbd10 00:12:14.480 /dev/nbd11 00:12:14.480 /dev/nbd12 00:12:14.480 /dev/nbd13 00:12:14.480 /dev/nbd14 00:12:14.480 /dev/nbd15 00:12:14.480 /dev/nbd2 00:12:14.480 /dev/nbd3 00:12:14.480 /dev/nbd4 00:12:14.480 /dev/nbd5 00:12:14.480 /dev/nbd6 00:12:14.480 /dev/nbd7 00:12:14.480 /dev/nbd8 00:12:14.480 /dev/nbd9' 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@65 -- # count=16 00:12:14.480 05:31:18 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:14.758 05:31:18 -- bdev/nbd_common.sh@95 -- # count=16 00:12:14.758 05:31:18 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:14.759 05:31:18 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:14.759 256+0 records in 00:12:14.759 256+0 records out 00:12:14.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101453 s, 103 MB/s 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:14.759 256+0 records in 00:12:14.759 256+0 records out 00:12:14.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146419 s, 7.2 MB/s 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:14.759 05:31:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:15.026 256+0 records in 00:12:15.026 256+0 records out 00:12:15.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134721 s, 7.8 MB/s 00:12:15.026 05:31:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.026 05:31:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:15.026 256+0 records in 00:12:15.026 256+0 records out 00:12:15.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139667 s, 7.5 MB/s 00:12:15.026 05:31:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.026 05:31:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:15.283 256+0 records in 00:12:15.283 256+0 records out 00:12:15.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152904 s, 6.9 MB/s 00:12:15.283 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.283 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:15.283 256+0 records in 00:12:15.283 256+0 records out 00:12:15.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166379 s, 6.3 MB/s 00:12:15.283 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.283 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:15.545 256+0 records in 00:12:15.545 256+0 records out 00:12:15.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137273 s, 7.6 MB/s 00:12:15.545 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.545 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:15.545 256+0 records 
in 00:12:15.545 256+0 records out 00:12:15.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13492 s, 7.8 MB/s 00:12:15.545 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.545 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:15.802 256+0 records in 00:12:15.802 256+0 records out 00:12:15.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130301 s, 8.0 MB/s 00:12:15.802 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:15.802 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:16.060 256+0 records in 00:12:16.060 256+0 records out 00:12:16.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143532 s, 7.3 MB/s 00:12:16.060 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.060 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:16.060 256+0 records in 00:12:16.060 256+0 records out 00:12:16.060 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133918 s, 7.8 MB/s 00:12:16.060 05:31:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.060 05:31:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:16.318 256+0 records in 00:12:16.318 256+0 records out 00:12:16.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134479 s, 7.8 MB/s 00:12:16.318 05:31:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.318 05:31:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:16.318 256+0 records in 00:12:16.318 256+0 records out 00:12:16.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142387 s, 7.4 MB/s 00:12:16.318 05:31:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.318 05:31:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:16.576 256+0 records in 00:12:16.576 256+0 records out 00:12:16.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141064 s, 7.4 MB/s 00:12:16.576 05:31:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.576 05:31:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:16.576 256+0 records in 00:12:16.576 256+0 records out 00:12:16.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144478 s, 7.3 MB/s 00:12:16.576 05:31:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.577 05:31:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:16.834 256+0 records in 00:12:16.834 256+0 records out 00:12:16.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140733 s, 7.5 MB/s 00:12:16.834 05:31:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:16.834 05:31:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:17.092 256+0 records in 00:12:17.092 256+0 records out 00:12:17.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.204396 s, 5.1 MB/s 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@51 -- # local i 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.092 05:31:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@41 -- # break 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.350 05:31:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@41 -- # break 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.608 05:31:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:18.174 
05:31:21 -- bdev/nbd_common.sh@41 -- # break 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.174 05:31:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:18.174 05:31:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:18.174 05:31:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:18.174 05:31:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:18.174 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.174 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.175 05:31:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:18.175 05:31:22 -- bdev/nbd_common.sh@41 -- # break 00:12:18.175 05:31:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.175 05:31:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.175 05:31:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@41 -- # break 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.433 05:31:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@41 -- # break 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@41 -- # break 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.999 05:31:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:19.257 05:31:23 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@41 -- # break 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.257 05:31:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@41 -- # break 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.514 05:31:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@41 -- # break 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.772 05:31:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@41 -- # break 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.029 05:31:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@45 -- # return 0 
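The same stop-and-wait sequence repeats for each remaining device below. For reference, a minimal sketch of the polling helper this xtrace is stepping through, reconstructed from the local/loop/grep/break steps shown above; the 0.1 s back-off on the retry path is an assumption, since this run only ever takes the immediate-exit branch:

waitfornbd_exit() {
    # Poll /proc/partitions until the kernel no longer lists the given nbd device.
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do               # bounded retry, as in the trace
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1                             # assumed back-off; not visible in this trace
        else
            break                                 # device is gone
        fi
    done
    return 0
}

The preceding rpc.py nbd_stop_disk call only asks SPDK to detach the export; the loop is what keeps the test from proceeding while the kernel still advertises the device node.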
00:12:20.287 05:31:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.287 05:31:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.545 05:31:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.803 05:31:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@41 -- # break 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.061 05:31:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:21.318 05:31:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:21.318 05:31:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 
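Below, nbd_get_count parses that empty JSON array to confirm that no nbd devices remain attached after the stops above. A minimal sketch of the counting idiom being traced, assuming rpc.py is invoked relative to the repo root and that nbd_get_disks returns a JSON array of objects carrying an nbd_device field (the '[]' result above is the empty case):

nbd_get_count() {
    local rpc_server=$1
    local disks_json disk_names count
    disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)      # '[]' when nothing is attached
    disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$disk_names" | grep -c /dev/nbd || true)           # '|| true': grep -c exits 1 on zero matches
    echo "$count"
}

The caller in this trace then asserts the returned count is 0 ('[' 0 -ne 0 ']' fails, so the check passes) before the nbd test is allowed to finish.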
00:12:21.318 05:31:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@65 -- # true 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@65 -- # count=0 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@104 -- # count=0 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@109 -- # return 0 00:12:21.576 05:31:25 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:21.576 05:31:25 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:21.834 malloc_lvol_verify 00:12:21.834 05:31:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:21.834 fb430d78-ce29-4e3b-abd9-4772764ded32 00:12:21.834 05:31:25 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:22.092 2e5d8528-e22d-40a6-8b21-8ac4a3f4f00e 00:12:22.092 05:31:26 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:22.380 /dev/nbd0 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:22.380 mke2fs 1.46.5 (30-Dec-2021) 00:12:22.380 00:12:22.380 Filesystem too small for a journal 00:12:22.380 Discarding device blocks: 0/1024 done 00:12:22.380 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:22.380 00:12:22.380 Allocating group tables: 0/1 done 00:12:22.380 Writing inode tables: 0/1 done 00:12:22.380 Writing superblocks and filesystem accounting information: 0/1 done 00:12:22.380 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@51 -- # local i 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:22.380 05:31:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:22.647 05:31:26 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@41 -- # break 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:22.647 05:31:26 -- bdev/nbd_common.sh@147 -- # return 0 00:12:22.647 05:31:26 -- bdev/blockdev.sh@324 -- # killprocess 119508 00:12:22.647 05:31:26 -- common/autotest_common.sh@926 -- # '[' -z 119508 ']' 00:12:22.647 05:31:26 -- common/autotest_common.sh@930 -- # kill -0 119508 00:12:22.647 05:31:26 -- common/autotest_common.sh@931 -- # uname 00:12:22.647 05:31:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:22.647 05:31:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119508 00:12:22.647 05:31:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:22.648 05:31:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:22.648 killing process with pid 119508 00:12:22.648 05:31:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119508' 00:12:22.648 05:31:26 -- common/autotest_common.sh@945 -- # kill 119508 00:12:22.648 05:31:26 -- common/autotest_common.sh@950 -- # wait 119508 00:12:24.548 05:31:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:24.549 00:12:24.549 real 0m25.758s 00:12:24.549 user 0m35.278s 00:12:24.549 sys 0m9.121s 00:12:24.549 05:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.549 05:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:24.549 ************************************ 00:12:24.549 END TEST bdev_nbd 00:12:24.549 ************************************ 00:12:24.549 05:31:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:24.549 05:31:28 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.549 05:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:24.549 ************************************ 00:12:24.549 START TEST bdev_fio 00:12:24.549 ************************************ 00:12:24.549 05:31:28 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@329 -- # local env_context 00:12:24.549 05:31:28 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:24.549 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:24.549 05:31:28 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:24.549 05:31:28 -- bdev/blockdev.sh@337 -- # echo '' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:24.549 05:31:28 -- bdev/blockdev.sh@337 -- # env_context= 00:12:24.549 05:31:28 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:24.549 05:31:28 -- common/autotest_common.sh@1260 -- # local workload=verify 00:12:24.549 05:31:28 -- common/autotest_common.sh@1261 -- # 
local bdev_type=AIO 00:12:24.549 05:31:28 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:24.549 05:31:28 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:24.549 05:31:28 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:24.549 05:31:28 -- common/autotest_common.sh@1280 -- # cat 00:12:24.549 05:31:28 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1293 -- # cat 00:12:24.549 05:31:28 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:24.549 05:31:28 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:24.549 05:31:28 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:24.549 05:31:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:24.549 05:31:28 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:24.549 05:31:28 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:24.549 05:31:28 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:24.549 05:31:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.549 05:31:28 -- common/autotest_common.sh@10 -- # set +x 00:12:24.549 ************************************ 00:12:24.549 START TEST bdev_fio_rw_verify 00:12:24.549 ************************************ 00:12:24.549 05:31:28 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:24.549 05:31:28 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:24.549 05:31:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:24.549 05:31:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:24.549 05:31:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:24.549 05:31:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:24.549 05:31:28 -- common/autotest_common.sh@1320 -- # shift 00:12:24.549 05:31:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:24.549 05:31:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:24.549 05:31:28 -- common/autotest_common.sh@1324 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:24.549 05:31:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:24.549 05:31:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:24.549 05:31:28 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:24.549 05:31:28 -- common/autotest_common.sh@1326 -- # break 00:12:24.549 05:31:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:24.549 05:31:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:24.808 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:24.808 fio-3.35 00:12:24.808 Starting 16 threads 00:12:37.043 00:12:37.043 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=121529: Mon Oct 7 05:31:40 2024 00:12:37.043 read: IOPS=60.5k, BW=236MiB/s (248MB/s)(2362MiB/10004msec) 00:12:37.043 slat (nsec): min=1952, max=34072k, avg=50874.88, stdev=519243.51 00:12:37.043 clat (usec): min=9, max=39526, avg=390.43, stdev=1455.04 00:12:37.043 lat (usec): min=28, max=39534, avg=441.31, stdev=1543.84 00:12:37.043 clat 
percentiles (usec): 00:12:37.043 | 50.000th=[ 225], 99.000th=[ 5014], 99.900th=[20317], 99.990th=[32113], 00:12:37.043 | 99.999th=[39584] 00:12:37.043 write: IOPS=94.5k, BW=369MiB/s (387MB/s)(3663MiB/9922msec); 0 zone resets 00:12:37.043 slat (usec): min=4, max=50807, avg=84.22, stdev=705.59 00:12:37.043 clat (usec): min=10, max=48298, avg=493.03, stdev=1672.08 00:12:37.043 lat (usec): min=39, max=51281, avg=577.26, stdev=1814.21 00:12:37.043 clat percentiles (usec): 00:12:37.043 | 50.000th=[ 281], 99.000th=[10290], 99.900th=[20579], 99.990th=[32637], 00:12:37.043 | 99.999th=[47973] 00:12:37.043 bw ( KiB/s): min=228762, max=606942, per=99.13%, avg=374765.37, stdev=6804.63, samples=304 00:12:37.043 iops : min=57190, max=151735, avg=93690.84, stdev=1701.15, samples=304 00:12:37.043 lat (usec) : 10=0.01%, 20=0.01%, 50=0.43%, 100=5.61%, 250=42.90% 00:12:37.043 lat (usec) : 500=43.06%, 750=3.87%, 1000=1.48% 00:12:37.043 lat (msec) : 2=1.17%, 4=0.22%, 10=0.34%, 20=0.79%, 50=0.13% 00:12:37.043 cpu : usr=52.72%, sys=2.63%, ctx=211358, majf=3, minf=65151 00:12:37.043 IO depths : 1=11.2%, 2=23.5%, 4=52.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.043 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.043 issued rwts: total=604756,937796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.043 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:37.043 00:12:37.043 Run status group 0 (all jobs): 00:12:37.043 READ: bw=236MiB/s (248MB/s), 236MiB/s-236MiB/s (248MB/s-248MB/s), io=2362MiB (2477MB), run=10004-10004msec 00:12:37.043 WRITE: bw=369MiB/s (387MB/s), 369MiB/s-369MiB/s (387MB/s-387MB/s), io=3663MiB (3841MB), run=9922-9922msec 00:12:38.421 ----------------------------------------------------- 00:12:38.421 Suppressions used: 00:12:38.421 count bytes template 00:12:38.421 16 140 /usr/src/fio/parse.c 00:12:38.421 7191 690336 /usr/src/fio/iolog.c 00:12:38.421 1 904 libcrypto.so 00:12:38.421 ----------------------------------------------------- 00:12:38.421 00:12:38.421 00:12:38.421 real 0m14.002s 00:12:38.421 user 1m30.247s 00:12:38.421 sys 0m5.103s 00:12:38.421 05:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.421 05:31:42 -- common/autotest_common.sh@10 -- # set +x 00:12:38.421 ************************************ 00:12:38.421 END TEST bdev_fio_rw_verify 00:12:38.421 ************************************ 00:12:38.682 05:31:42 -- bdev/blockdev.sh@348 -- # rm -f 00:12:38.682 05:31:42 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:38.682 05:31:42 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:38.682 05:31:42 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:38.682 05:31:42 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:38.682 05:31:42 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:38.682 05:31:42 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:38.682 05:31:42 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1278 -- # 
touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:38.682 05:31:42 -- common/autotest_common.sh@1280 -- # cat 00:12:38.682 05:31:42 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:38.682 05:31:42 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:38.682 05:31:42 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:38.683 05:31:42 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b646633-1e37-500c-ad55-e7bfcf653e5d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0b646633-1e37-500c-ad55-e7bfcf653e5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "9fd7a7fa-2ef8-5f29-8462-2751a965728f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9fd7a7fa-2ef8-5f29-8462-2751a965728f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bce5c2d6-c823-5285-aa35-274aae8692d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bce5c2d6-c823-5285-aa35-274aae8692d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "dcc3d22d-d66b-508b-8dd8-3d4ef838efac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dcc3d22d-d66b-508b-8dd8-3d4ef838efac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "424d2446-c282-5c64-b042-a58b9ed43e62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "424d2446-c282-5c64-b042-a58b9ed43e62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7f04ce2-dd89-5790-bb02-9fbd13853d9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7f04ce2-dd89-5790-bb02-9fbd13853d9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "428adfe9-1782-5438-a093-eb786fbbf222"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "428adfe9-1782-5438-a093-eb786fbbf222",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "36981328-bda0-51a2-b9ba-08e6b6ea7690"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36981328-bda0-51a2-b9ba-08e6b6ea7690",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "6b4681b7-2a04-5959-a144-f9c26e29a3d9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b4681b7-2a04-5959-a144-f9c26e29a3d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "92a6dfff-437d-577d-b40f-896dac1e4d97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "92a6dfff-437d-577d-b40f-896dac1e4d97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1650caa1-5031-50f9-8280-b9dedc688535"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1650caa1-5031-50f9-8280-b9dedc688535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e34b9df3-3e02-4c67-880f-224bb7c0d0f1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "97e631f8-d36a-4051-9c27-88793239decd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8e6e0a45-324d-4e82-9761-38cb2d632da0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dadbf68-ae39-4c59-b64c-be22642bcb03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef574f76-a03f-4230-855d-010861fe6799",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fbb94a2f-ffc9-424d-9815-a6d717d7d234"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7df0b244-89ab-42e8-a7ab-70981b6ad1f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3d2decfd-a688-4c2e-8e4f-142451992433",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "57fb395e-21c4-4932-83eb-e7cbb243cf94"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "57fb395e-21c4-4932-83eb-e7cbb243cf94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:38.683 05:31:42 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:38.683 Malloc1p0 00:12:38.683 Malloc1p1 00:12:38.683 Malloc2p0 00:12:38.683 Malloc2p1 00:12:38.683 Malloc2p2 00:12:38.683 Malloc2p3 00:12:38.683 Malloc2p4 00:12:38.683 Malloc2p5 00:12:38.683 Malloc2p6 00:12:38.683 Malloc2p7 00:12:38.683 TestPT 00:12:38.683 raid0 00:12:38.683 concat0 ]] 00:12:38.683 05:31:42 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:38.684 05:31:42 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a9931a18-53b2-48e1-a8d2-1a03fdb7f7ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0b646633-1e37-500c-ad55-e7bfcf653e5d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0b646633-1e37-500c-ad55-e7bfcf653e5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "9fd7a7fa-2ef8-5f29-8462-2751a965728f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9fd7a7fa-2ef8-5f29-8462-2751a965728f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bce5c2d6-c823-5285-aa35-274aae8692d3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bce5c2d6-c823-5285-aa35-274aae8692d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "dcc3d22d-d66b-508b-8dd8-3d4ef838efac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dcc3d22d-d66b-508b-8dd8-3d4ef838efac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "424d2446-c282-5c64-b042-a58b9ed43e62"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "424d2446-c282-5c64-b042-a58b9ed43e62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7f04ce2-dd89-5790-bb02-9fbd13853d9c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7f04ce2-dd89-5790-bb02-9fbd13853d9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "428adfe9-1782-5438-a093-eb786fbbf222"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "428adfe9-1782-5438-a093-eb786fbbf222",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "36981328-bda0-51a2-b9ba-08e6b6ea7690"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "36981328-bda0-51a2-b9ba-08e6b6ea7690",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "6b4681b7-2a04-5959-a144-f9c26e29a3d9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b4681b7-2a04-5959-a144-f9c26e29a3d9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "92a6dfff-437d-577d-b40f-896dac1e4d97"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "92a6dfff-437d-577d-b40f-896dac1e4d97",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1650caa1-5031-50f9-8280-b9dedc688535"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1650caa1-5031-50f9-8280-b9dedc688535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "e34b9df3-3e02-4c67-880f-224bb7c0d0f1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "e34b9df3-3e02-4c67-880f-224bb7c0d0f1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "97e631f8-d36a-4051-9c27-88793239decd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "8e6e0a45-324d-4e82-9761-38cb2d632da0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2fe3a24b-f5d8-4c45-873e-b6d6a55f4357",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7dadbf68-ae39-4c59-b64c-be22642bcb03",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef574f76-a03f-4230-855d-010861fe6799",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fbb94a2f-ffc9-424d-9815-a6d717d7d234"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fbb94a2f-ffc9-424d-9815-a6d717d7d234",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7df0b244-89ab-42e8-a7ab-70981b6ad1f5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "3d2decfd-a688-4c2e-8e4f-142451992433",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "57fb395e-21c4-4932-83eb-e7cbb243cf94"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "57fb395e-21c4-4932-83eb-e7cbb243cf94",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:38.684 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.684 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:38.684 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:38.684 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.684 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:38.684 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:38.684 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:38.685 05:31:42 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:38.685 05:31:42 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:38.685 05:31:42 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:38.685 05:31:42 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.685 05:31:42 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:38.685 05:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.685 05:31:42 -- common/autotest_common.sh@10 -- # set +x 00:12:38.685 ************************************ 00:12:38.685 START TEST bdev_fio_trim 00:12:38.685 ************************************ 00:12:38.685 05:31:42 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.685 05:31:42 -- common/autotest_common.sh@1335 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.685 05:31:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:38.685 05:31:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:38.685 05:31:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:38.685 05:31:42 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.685 05:31:42 -- common/autotest_common.sh@1320 -- # shift 00:12:38.685 05:31:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:38.685 05:31:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:38.685 05:31:42 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:38.685 05:31:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:38.685 05:31:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:38.685 05:31:42 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:38.685 05:31:42 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:38.685 05:31:42 -- common/autotest_common.sh@1326 -- # break 00:12:38.685 05:31:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:38.685 05:31:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:38.944 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:38.944 fio-3.35 00:12:38.944 Starting 14 threads 00:12:51.146 00:12:51.146 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=124667: Mon Oct 7 05:31:54 2024 00:12:51.146 write: IOPS=116k, BW=454MiB/s (476MB/s)(4537MiB/10001msec); 0 zone resets 00:12:51.146 slat (usec): min=2, max=36039, avg=44.63, stdev=441.35 00:12:51.146 clat (usec): min=23, max=48275, avg=292.51, stdev=1153.91 00:12:51.146 lat (usec): min=32, max=48315, avg=337.15, stdev=1234.51 00:12:51.146 clat percentiles (usec): 00:12:51.146 | 50.000th=[ 194], 99.000th=[ 742], 99.900th=[16319], 99.990th=[24249], 00:12:51.146 | 99.999th=[32113] 00:12:51.146 bw ( KiB/s): min=316866, max=663749, per=99.96%, avg=464313.68, stdev=8487.15, samples=266 00:12:51.146 iops : min=79216, max=165937, avg=116078.11, stdev=2121.79, samples=266 00:12:51.146 trim: IOPS=116k, BW=454MiB/s (476MB/s)(4537MiB/10001msec); 0 zone resets 00:12:51.146 slat (usec): min=4, max=48029, avg=30.22, stdev=372.51 00:12:51.146 clat (usec): min=3, max=48316, avg=332.11, stdev=1219.01 00:12:51.146 lat (usec): min=14, max=48337, avg=362.33, stdev=1274.09 00:12:51.146 clat percentiles (usec): 00:12:51.146 | 50.000th=[ 223], 99.000th=[ 930], 99.900th=[16319], 99.990th=[24249], 00:12:51.146 | 99.999th=[32375] 00:12:51.146 bw ( KiB/s): min=316866, max=663749, per=99.96%, avg=464314.11, stdev=8487.66, samples=266 00:12:51.146 iops : min=79216, max=165937, avg=116078.21, stdev=2121.92, samples=266 00:12:51.146 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.39%, 100=4.83% 00:12:51.146 lat (usec) : 250=62.46%, 500=30.56%, 750=0.63%, 1000=0.21% 00:12:51.146 lat (msec) : 2=0.16%, 4=0.05%, 10=0.13%, 20=0.52%, 50=0.04% 00:12:51.146 cpu : usr=66.32%, sys=0.31%, ctx=162083, majf=0, minf=720 00:12:51.146 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.146 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.146 issued rwts: total=0,1161409,1161412,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.146 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:51.146 00:12:51.146 Run status group 0 (all jobs): 00:12:51.146 WRITE: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4537MiB (4757MB), run=10001-10001msec 00:12:51.146 TRIM: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4537MiB (4757MB), run=10001-10001msec 00:12:52.520 ----------------------------------------------------- 00:12:52.520 Suppressions used: 00:12:52.520 count bytes template 00:12:52.520 14 129 /usr/src/fio/parse.c 00:12:52.520 1 904 libcrypto.so 00:12:52.520 ----------------------------------------------------- 00:12:52.520 00:12:52.520 00:12:52.520 real 0m13.542s 00:12:52.520 user 1m37.861s 00:12:52.520 sys 0m1.081s 00:12:52.521 05:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.521 05:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 ************************************ 00:12:52.521 END TEST bdev_fio_trim 00:12:52.521 ************************************ 00:12:52.521 05:31:56 -- bdev/blockdev.sh@366 -- # rm -f 00:12:52.521 05:31:56 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
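The [job_*] stanzas traced above (blockdev.sh@354-356) are produced by filtering the bdev list for devices whose supported_io_types.unmap is true, so only unmap-capable bdevs receive a trim job. Below is a minimal bash sketch of that generation step, assuming "bdevs" already holds the JSON objects printed by bdev_get_bdevs and "fio_config" is the job file later handed to fio; the variable names and scratch path are illustrative, not the verbatim test script:

#!/usr/bin/env bash
fio_config=/tmp/bdev.fio   # assumed scratch path; the run above uses test/bdev/bdev.fio
# Emit one [job_<name>]/filename=<name> pair per bdev that advertises unmap support,
# using the same jq filter that appears in the trace above.
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$fio_config"
done
# The generated file is then driven through the SPDK fio plugin, as in the
# fio_bdev invocation above (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10).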
00:12:52.521 /home/vagrant/spdk_repo/spdk 00:12:52.521 05:31:56 -- bdev/blockdev.sh@368 -- # popd 00:12:52.521 05:31:56 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:52.521 00:12:52.521 real 0m27.891s 00:12:52.521 user 3m8.328s 00:12:52.521 sys 0m6.290s 00:12:52.521 05:31:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.521 ************************************ 00:12:52.521 END TEST bdev_fio 00:12:52.521 ************************************ 00:12:52.521 05:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 05:31:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:52.521 05:31:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:52.521 05:31:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:52.521 05:31:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:52.521 05:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 ************************************ 00:12:52.521 START TEST bdev_verify 00:12:52.521 ************************************ 00:12:52.521 05:31:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:52.521 [2024-10-07 05:31:56.301941] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:52.521 [2024-10-07 05:31:56.302122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126804 ] 00:12:52.521 [2024-10-07 05:31:56.492624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:52.781 [2024-10-07 05:31:56.711904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.781 [2024-10-07 05:31:56.711912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.348 [2024-10-07 05:31:57.034345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:53.348 [2024-10-07 05:31:57.034438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:53.348 [2024-10-07 05:31:57.042299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:53.348 [2024-10-07 05:31:57.042372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:53.348 [2024-10-07 05:31:57.050330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:53.348 [2024-10-07 05:31:57.050381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:53.348 [2024-10-07 05:31:57.050429] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:53.348 [2024-10-07 05:31:57.229101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:53.348 [2024-10-07 05:31:57.229244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.348 [2024-10-07 05:31:57.229323] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:53.348 [2024-10-07 05:31:57.229357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.348 
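The vbdev_passthru notices above come from the --json configuration that bdevperf loads at startup: the config declares the Malloc base bdevs plus a passthru vbdev on top of Malloc3, and the vbdev is only created once its base bdev appears ("vbdev creation deferred pending base bdev arrival", then "Match on Malloc3"). A minimal sketch of that config shape follows, assuming standard bdev_malloc_create/bdev_passthru_create entries; the real bdev.json used here also defines the split, RAID and AIO bdevs listed earlier, and the exact parameters below are illustrative rather than a copy of that file:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc3", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_passthru_create",
          "params": { "base_bdev_name": "Malloc3", "name": "TestPT" } }
      ]
    }
  ]
}
EOF
# Hand the config to bdevperf with the same flags as the run_test line above:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3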
[2024-10-07 05:31:57.231695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.348 [2024-10-07 05:31:57.231743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:53.915 Running I/O for 5 seconds... 00:12:59.181 00:12:59.181 Latency(us) 00:12:59.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.181 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x0 length 0x1000 00:12:59.181 Malloc0 : 5.23 1312.08 5.13 0.00 0.00 96973.18 2129.92 280255.77 00:12:59.181 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x1000 length 0x1000 00:12:59.181 Malloc0 : 5.25 1211.64 4.73 0.00 0.00 104843.58 2606.55 369861.35 00:12:59.181 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x0 length 0x800 00:12:59.181 Malloc1p0 : 5.23 921.38 3.60 0.00 0.00 137882.34 4706.68 166818.91 00:12:59.181 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x800 length 0x800 00:12:59.181 Malloc1p0 : 5.25 869.24 3.40 0.00 0.00 146064.08 4855.62 167772.16 00:12:59.181 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x0 length 0x800 00:12:59.181 Malloc1p1 : 5.23 921.16 3.60 0.00 0.00 137636.79 5034.36 161099.40 00:12:59.181 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x800 length 0x800 00:12:59.181 Malloc1p1 : 5.25 868.65 3.39 0.00 0.00 145858.08 5153.51 163005.91 00:12:59.181 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x0 length 0x200 00:12:59.181 Malloc2p0 : 5.24 920.91 3.60 0.00 0.00 137368.78 5183.30 156333.15 00:12:59.181 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.181 Verification LBA range: start 0x200 length 0x200 00:12:59.181 Malloc2p0 : 5.26 868.06 3.39 0.00 0.00 145649.84 5034.36 158239.65 00:12:59.181 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p1 : 5.24 920.68 3.60 0.00 0.00 137126.87 4736.47 151566.89 00:12:59.182 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p1 : 5.26 867.39 3.39 0.00 0.00 145454.76 4647.10 154426.65 00:12:59.182 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p2 : 5.24 920.47 3.60 0.00 0.00 136881.27 4587.52 146800.64 00:12:59.182 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p2 : 5.26 867.22 3.39 0.00 0.00 145263.75 4527.94 150613.64 00:12:59.182 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p3 : 5.24 920.24 3.59 0.00 0.00 136710.39 4706.68 142987.64 00:12:59.182 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p3 
: 5.26 867.04 3.39 0.00 0.00 145057.95 4498.15 145847.39 00:12:59.182 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p4 : 5.24 920.00 3.59 0.00 0.00 136464.81 4766.25 138221.38 00:12:59.182 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p4 : 5.26 866.87 3.39 0.00 0.00 144849.21 4438.57 142034.39 00:12:59.182 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p5 : 5.24 919.78 3.59 0.00 0.00 136255.35 4587.52 134408.38 00:12:59.182 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p5 : 5.27 866.69 3.39 0.00 0.00 144671.74 4468.36 139174.63 00:12:59.182 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p6 : 5.24 919.55 3.59 0.00 0.00 136051.71 4706.68 130595.37 00:12:59.182 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p6 : 5.27 866.51 3.38 0.00 0.00 144475.30 4468.36 135361.63 00:12:59.182 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x200 00:12:59.182 Malloc2p7 : 5.25 919.33 3.59 0.00 0.00 135841.75 4051.32 128688.87 00:12:59.182 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x200 length 0x200 00:12:59.182 Malloc2p7 : 5.27 866.35 3.38 0.00 0.00 144251.98 4498.15 131548.63 00:12:59.182 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x1000 00:12:59.182 TestPT : 5.25 919.08 3.59 0.00 0.00 135572.37 5064.15 118679.74 00:12:59.182 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x1000 length 0x1000 00:12:59.182 TestPT : 5.27 837.33 3.27 0.00 0.00 148950.13 6225.92 222107.46 00:12:59.182 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x2000 00:12:59.182 raid0 : 5.25 918.43 3.59 0.00 0.00 135329.18 5064.15 113913.48 00:12:59.182 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x2000 length 0x2000 00:12:59.182 raid0 : 5.27 866.00 3.38 0.00 0.00 143754.18 4557.73 115819.99 00:12:59.182 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x2000 00:12:59.182 concat0 : 5.25 917.99 3.59 0.00 0.00 135108.61 4796.04 109147.23 00:12:59.182 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x2000 length 0x2000 00:12:59.182 concat0 : 5.27 865.82 3.38 0.00 0.00 143507.93 4676.89 111053.73 00:12:59.182 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x1000 00:12:59.182 raid1 : 5.25 917.61 3.58 0.00 0.00 134896.01 3351.27 105334.23 00:12:59.182 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
4096) 00:12:59.182 Verification LBA range: start 0x1000 length 0x1000 00:12:59.182 raid1 : 5.27 865.65 3.38 0.00 0.00 143262.16 5540.77 110100.48 00:12:59.182 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x0 length 0x4e2 00:12:59.182 AIO0 : 5.26 916.74 3.58 0.00 0.00 134695.21 10247.45 103904.35 00:12:59.182 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:59.182 Verification LBA range: start 0x4e2 length 0x4e2 00:12:59.182 AIO0 : 5.27 865.49 3.38 0.00 0.00 142819.78 13166.78 109623.85 00:12:59.182 =================================================================================================================== 00:12:59.182 Total : 29291.40 114.42 0.00 0.00 137050.05 2129.92 369861.35 00:13:01.082 00:13:01.082 real 0m8.410s 00:13:01.082 user 0m14.679s 00:13:01.082 sys 0m0.619s 00:13:01.082 05:32:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.082 05:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 ************************************ 00:13:01.082 END TEST bdev_verify 00:13:01.082 ************************************ 00:13:01.082 05:32:04 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:01.082 05:32:04 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:01.082 05:32:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.082 05:32:04 -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 ************************************ 00:13:01.082 START TEST bdev_verify_big_io 00:13:01.082 ************************************ 00:13:01.082 05:32:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:01.082 [2024-10-07 05:32:04.750742] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:01.082 [2024-10-07 05:32:04.751736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127240 ] 00:13:01.082 [2024-10-07 05:32:04.921521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.340 [2024-10-07 05:32:05.077261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.340 [2024-10-07 05:32:05.077268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.599 [2024-10-07 05:32:05.401735] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:01.599 [2024-10-07 05:32:05.401827] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:01.599 [2024-10-07 05:32:05.409730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:01.599 [2024-10-07 05:32:05.409803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:01.599 [2024-10-07 05:32:05.417749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:01.599 [2024-10-07 05:32:05.417801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:01.599 [2024-10-07 05:32:05.417833] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:01.857 [2024-10-07 05:32:05.597653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:01.857 [2024-10-07 05:32:05.597808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.857 [2024-10-07 05:32:05.597876] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:01.857 [2024-10-07 05:32:05.597900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.857 [2024-10-07 05:32:05.600309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.857 [2024-10-07 05:32:05.600365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:02.115 [2024-10-07 05:32:05.926749] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.929509] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.932787] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.936005] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.938749] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.941923] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.944647] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.947852] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.950614] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.953816] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.956491] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.959682] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.962410] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.965654] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.969041] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:05.971870] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:02.115 [2024-10-07 05:32:06.041099] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:02.115 [2024-10-07 05:32:06.046705] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:02.115 Running I/O for 5 seconds... 00:13:08.675 00:13:08.675 Latency(us) 00:13:08.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.675 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x100 00:13:08.675 Malloc0 : 5.48 504.87 31.55 0.00 0.00 247691.95 17396.83 720657.69 00:13:08.675 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x100 length 0x100 00:13:08.675 Malloc0 : 5.56 476.45 29.78 0.00 0.00 264329.13 14477.50 846486.81 00:13:08.675 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x80 00:13:08.675 Malloc1p0 : 5.60 241.84 15.12 0.00 0.00 506816.07 34078.72 865551.83 00:13:08.675 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x80 length 0x80 00:13:08.675 Malloc1p0 : 5.57 353.47 22.09 0.00 0.00 353421.44 29908.25 766413.73 00:13:08.675 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x80 00:13:08.675 Malloc1p1 : 5.67 156.79 9.80 0.00 0.00 773210.33 32648.84 1548079.48 00:13:08.675 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x80 length 0x80 00:13:08.675 Malloc1p1 : 5.64 157.49 9.84 0.00 0.00 783587.33 29789.09 1624339.55 00:13:08.675 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p0 : 5.56 92.66 5.79 0.00 0.00 327611.98 5719.51 568137.54 00:13:08.675 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p0 : 5.57 88.99 5.56 0.00 0.00 343441.73 5689.72 484251.46 00:13:08.675 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p1 : 5.56 92.64 5.79 0.00 0.00 326600.91 5510.98 556698.53 00:13:08.675 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p1 : 5.58 88.96 5.56 0.00 0.00 342506.68 5451.40 476625.45 00:13:08.675 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p2 : 5.56 92.62 5.79 0.00 0.00 325540.77 5689.72 545259.52 00:13:08.675 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p2 : 5.58 88.93 5.56 0.00 0.00 341605.32 5153.51 467092.95 00:13:08.675 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p3 : 5.56 92.59 5.79 0.00 0.00 324519.90 6255.71 533820.51 00:13:08.675 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p3 : 5.58 88.91 5.56 0.00 0.00 340612.31 5272.67 457560.44 00:13:08.675 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p4 : 5.56 92.54 5.78 0.00 0.00 323474.50 6166.34 518568.49 00:13:08.675 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p4 : 5.58 88.88 5.56 0.00 0.00 339745.39 6076.97 448027.93 00:13:08.675 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p5 : 5.57 92.52 5.78 0.00 0.00 322486.86 6940.86 507129.48 00:13:08.675 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.675 Malloc2p5 : 5.58 88.86 5.55 0.00 0.00 338762.26 5421.61 436588.92 00:13:08.675 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x0 length 0x20 00:13:08.675 Malloc2p6 : 5.57 92.50 5.78 0.00 0.00 321244.57 6285.50 491877.47 00:13:08.675 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.675 Verification LBA range: start 0x20 length 0x20 00:13:08.676 Malloc2p6 : 5.58 88.83 5.55 0.00 0.00 337854.90 5481.19 427056.41 00:13:08.676 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x20 00:13:08.676 Malloc2p7 : 5.57 92.45 5.78 0.00 0.00 320189.95 5928.03 478531.96 00:13:08.676 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x20 length 0x20 00:13:08.676 Malloc2p7 : 5.59 88.80 5.55 0.00 0.00 336944.17 5332.25 417523.90 00:13:08.676 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x100 00:13:08.676 TestPT : 5.71 161.44 10.09 0.00 0.00 722208.79 35031.97 1540453.47 00:13:08.676 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x100 length 0x100 00:13:08.676 TestPT : 5.70 151.87 9.49 0.00 0.00 778790.01 38606.66 1670095.59 00:13:08.676 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x200 00:13:08.676 raid0 : 5.65 169.17 10.57 0.00 0.00 687524.37 31933.91 1548079.48 00:13:08.676 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x200 length 0x200 00:13:08.676 raid0 : 5.65 157.36 9.84 0.00 0.00 749921.10 32410.53 1601461.53 00:13:08.676 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x200 00:13:08.676 concat0 : 5.72 172.62 10.79 0.00 0.00 662105.43 27405.96 1563331.49 00:13:08.676 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x200 length 0x200 00:13:08.676 concat0 : 5.69 162.08 10.13 0.00 
0.00 717265.22 30980.65 1609087.53 00:13:08.676 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x100 00:13:08.676 raid1 : 5.69 178.60 11.16 0.00 0.00 633870.91 17754.30 1570957.50 00:13:08.676 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x100 length 0x100 00:13:08.676 raid1 : 5.69 162.00 10.12 0.00 0.00 707203.87 33125.47 1616713.54 00:13:08.676 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x0 length 0x4e 00:13:08.676 AIO0 : 5.72 193.83 12.11 0.00 0.00 353188.53 1802.24 907494.87 00:13:08.676 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:08.676 Verification LBA range: start 0x4e length 0x4e 00:13:08.676 AIO0 : 5.63 151.78 9.49 0.00 0.00 457290.28 8460.10 880803.84 00:13:08.676 =================================================================================================================== 00:13:08.676 Total : 5003.34 312.71 0.00 0.00 459694.20 1802.24 1670095.59 00:13:08.676 [2024-10-07 05:32:12.629706] thread.c:2244:spdk_io_device_unregister: *WARNING*: io_device bdev_Malloc3 (0x616000009681) has 74 for_each calls outstanding 00:13:10.080 00:13:10.080 real 0m9.181s 00:13:10.080 user 0m16.655s 00:13:10.080 sys 0m0.593s 00:13:10.080 05:32:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.080 05:32:13 -- common/autotest_common.sh@10 -- # set +x 00:13:10.080 ************************************ 00:13:10.080 END TEST bdev_verify_big_io 00:13:10.080 ************************************ 00:13:10.080 05:32:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:10.080 05:32:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:10.080 05:32:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:10.080 05:32:13 -- common/autotest_common.sh@10 -- # set +x 00:13:10.080 ************************************ 00:13:10.080 START TEST bdev_write_zeroes 00:13:10.080 ************************************ 00:13:10.080 05:32:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:10.080 [2024-10-07 05:32:13.983254] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:10.080 [2024-10-07 05:32:13.983385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127546 ] 00:13:10.337 [2024-10-07 05:32:14.130273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.337 [2024-10-07 05:32:14.308024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.902 [2024-10-07 05:32:14.638722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:10.902 [2024-10-07 05:32:14.638833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:10.902 [2024-10-07 05:32:14.646696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:10.902 [2024-10-07 05:32:14.646769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:10.902 [2024-10-07 05:32:14.654718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:10.902 [2024-10-07 05:32:14.654768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:10.902 [2024-10-07 05:32:14.654797] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:10.902 [2024-10-07 05:32:14.828041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:10.902 [2024-10-07 05:32:14.828201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.902 [2024-10-07 05:32:14.828256] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:10.903 [2024-10-07 05:32:14.828285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.903 [2024-10-07 05:32:14.830652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.903 [2024-10-07 05:32:14.830707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:11.468 Running I/O for 1 seconds... 
00:13:12.410 00:13:12.410 Latency(us) 00:13:12.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.410 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc0 : 1.04 6408.20 25.03 0.00 0.00 19964.34 636.74 35031.97 00:13:12.410 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc1p0 : 1.04 6401.68 25.01 0.00 0.00 19959.66 781.96 34317.03 00:13:12.410 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc1p1 : 1.04 6395.22 24.98 0.00 0.00 19948.16 781.96 33602.09 00:13:12.410 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p0 : 1.04 6388.89 24.96 0.00 0.00 19929.72 781.96 32887.16 00:13:12.410 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p1 : 1.04 6382.17 24.93 0.00 0.00 19911.05 767.07 31933.91 00:13:12.410 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p2 : 1.04 6375.83 24.91 0.00 0.00 19891.88 741.00 31218.97 00:13:12.410 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p3 : 1.05 6369.01 24.88 0.00 0.00 19880.88 785.69 30384.87 00:13:12.410 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p4 : 1.05 6362.69 24.85 0.00 0.00 19864.77 778.24 29550.78 00:13:12.410 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p5 : 1.05 6356.40 24.83 0.00 0.00 19853.76 759.62 28835.84 00:13:12.410 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p6 : 1.05 6350.04 24.80 0.00 0.00 19845.52 815.48 28001.75 00:13:12.410 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 Malloc2p7 : 1.05 6343.77 24.78 0.00 0.00 19826.90 830.37 27167.65 00:13:12.410 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 TestPT : 1.05 6337.27 24.75 0.00 0.00 19807.99 848.99 26333.56 00:13:12.410 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 raid0 : 1.05 6329.80 24.73 0.00 0.00 19785.33 1333.06 25022.84 00:13:12.410 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 concat0 : 1.05 6322.35 24.70 0.00 0.00 19752.16 1288.38 23712.12 00:13:12.410 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 raid1 : 1.05 6313.25 24.66 0.00 0.00 19713.94 2100.13 21686.46 00:13:12.410 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:12.410 AIO0 : 1.06 6291.58 24.58 0.00 0.00 19696.48 1295.83 21448.15 00:13:12.410 =================================================================================================================== 00:13:12.410 Total : 101728.16 397.38 0.00 0.00 19852.06 636.74 35031.97 00:13:14.311 00:13:14.311 real 0m4.257s 00:13:14.311 user 0m3.646s 00:13:14.311 sys 0m0.396s 00:13:14.311 05:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.311 05:32:18 -- common/autotest_common.sh@10 -- # set +x 00:13:14.311 ************************************ 00:13:14.311 END TEST bdev_write_zeroes 00:13:14.311 ************************************ 00:13:14.311 05:32:18 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:14.311 05:32:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:14.311 05:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:14.311 05:32:18 -- common/autotest_common.sh@10 -- # set +x 00:13:14.311 ************************************ 00:13:14.311 START TEST bdev_json_nonenclosed 00:13:14.311 ************************************ 00:13:14.311 05:32:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:14.573 [2024-10-07 05:32:18.317760] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:14.573 [2024-10-07 05:32:18.317975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127720 ] 00:13:14.573 [2024-10-07 05:32:18.486952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.832 [2024-10-07 05:32:18.715965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.832 [2024-10-07 05:32:18.716170] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:14.832 [2024-10-07 05:32:18.716213] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:15.090 ************************************ 00:13:15.090 END TEST bdev_json_nonenclosed 00:13:15.090 ************************************ 00:13:15.090 00:13:15.090 real 0m0.794s 00:13:15.090 user 0m0.537s 00:13:15.090 sys 0m0.157s 00:13:15.090 05:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.090 05:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.349 05:32:19 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.349 05:32:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:15.349 05:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.349 05:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.349 ************************************ 00:13:15.349 START TEST bdev_json_nonarray 00:13:15.349 ************************************ 00:13:15.349 05:32:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:15.349 [2024-10-07 05:32:19.149309] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:15.349 [2024-10-07 05:32:19.149505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127790 ] 00:13:15.349 [2024-10-07 05:32:19.318765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.608 [2024-10-07 05:32:19.476138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.608 [2024-10-07 05:32:19.476346] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:15.608 [2024-10-07 05:32:19.476385] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:15.866 00:13:15.866 real 0m0.696s 00:13:15.866 user 0m0.461s 00:13:15.866 sys 0m0.132s 00:13:15.866 ************************************ 00:13:15.866 END TEST bdev_json_nonarray 00:13:15.866 ************************************ 00:13:15.866 05:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.866 05:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.866 05:32:19 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:15.866 05:32:19 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:15.866 05:32:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.866 05:32:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.866 05:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:16.125 ************************************ 00:13:16.125 START TEST bdev_qos 00:13:16.125 ************************************ 00:13:16.125 05:32:19 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:16.125 05:32:19 -- bdev/blockdev.sh@444 -- # QOS_PID=127822 00:13:16.125 Process qos testing pid: 127822 00:13:16.125 05:32:19 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 127822' 00:13:16.125 05:32:19 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:16.125 05:32:19 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:16.125 05:32:19 -- bdev/blockdev.sh@447 -- # waitforlisten 127822 00:13:16.125 05:32:19 -- common/autotest_common.sh@819 -- # '[' -z 127822 ']' 00:13:16.125 05:32:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.125 05:32:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.125 05:32:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.125 05:32:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.125 05:32:19 -- common/autotest_common.sh@10 -- # set +x 00:13:16.125 [2024-10-07 05:32:19.916193] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:13:16.125 [2024-10-07 05:32:19.916416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127822 ] 00:13:16.125 [2024-10-07 05:32:20.086304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.388 [2024-10-07 05:32:20.309588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.988 05:32:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:16.988 05:32:20 -- common/autotest_common.sh@852 -- # return 0 00:13:16.988 05:32:20 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:16.988 05:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.988 05:32:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.247 Malloc_0 00:13:17.247 05:32:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:20 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:17.247 05:32:20 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:17.247 05:32:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:17.247 05:32:20 -- common/autotest_common.sh@889 -- # local i 00:13:17.247 05:32:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:17.247 05:32:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:17.247 05:32:20 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:17.247 05:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.247 05:32:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.247 05:32:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:20 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:17.247 05:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.247 05:32:20 -- common/autotest_common.sh@10 -- # set +x 00:13:17.247 [ 00:13:17.247 { 00:13:17.247 "name": "Malloc_0", 00:13:17.247 "aliases": [ 00:13:17.247 "dbbd28dd-3577-4cb2-b968-2b99eb084497" 00:13:17.247 ], 00:13:17.247 "product_name": "Malloc disk", 00:13:17.247 "block_size": 512, 00:13:17.247 "num_blocks": 262144, 00:13:17.247 "uuid": "dbbd28dd-3577-4cb2-b968-2b99eb084497", 00:13:17.247 "assigned_rate_limits": { 00:13:17.247 "rw_ios_per_sec": 0, 00:13:17.247 "rw_mbytes_per_sec": 0, 00:13:17.247 "r_mbytes_per_sec": 0, 00:13:17.247 "w_mbytes_per_sec": 0 00:13:17.247 }, 00:13:17.247 "claimed": false, 00:13:17.247 "zoned": false, 00:13:17.247 "supported_io_types": { 00:13:17.247 "read": true, 00:13:17.247 "write": true, 00:13:17.247 "unmap": true, 00:13:17.247 "write_zeroes": true, 00:13:17.247 "flush": true, 00:13:17.247 "reset": true, 00:13:17.247 "compare": false, 00:13:17.247 "compare_and_write": false, 00:13:17.247 "abort": true, 00:13:17.247 "nvme_admin": false, 00:13:17.247 "nvme_io": false 00:13:17.247 }, 00:13:17.247 "memory_domains": [ 00:13:17.247 { 00:13:17.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.247 "dma_device_type": 2 00:13:17.247 } 00:13:17.247 ], 00:13:17.247 "driver_specific": {} 00:13:17.247 } 00:13:17.247 ] 00:13:17.247 05:32:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:20 -- common/autotest_common.sh@895 -- # return 0 00:13:17.247 05:32:20 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:17.247 05:32:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.247 05:32:20 -- common/autotest_common.sh@10 -- # 
set +x 00:13:17.247 Null_1 00:13:17.247 05:32:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:21 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:17.247 05:32:21 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:17.247 05:32:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:17.247 05:32:21 -- common/autotest_common.sh@889 -- # local i 00:13:17.247 05:32:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:17.247 05:32:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:17.247 05:32:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:17.247 05:32:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.247 05:32:21 -- common/autotest_common.sh@10 -- # set +x 00:13:17.247 05:32:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:21 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:17.247 05:32:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.247 05:32:21 -- common/autotest_common.sh@10 -- # set +x 00:13:17.247 [ 00:13:17.247 { 00:13:17.247 "name": "Null_1", 00:13:17.247 "aliases": [ 00:13:17.247 "a929134d-1b3a-41de-9927-a750f3ad1b43" 00:13:17.247 ], 00:13:17.247 "product_name": "Null disk", 00:13:17.247 "block_size": 512, 00:13:17.247 "num_blocks": 262144, 00:13:17.247 "uuid": "a929134d-1b3a-41de-9927-a750f3ad1b43", 00:13:17.247 "assigned_rate_limits": { 00:13:17.247 "rw_ios_per_sec": 0, 00:13:17.247 "rw_mbytes_per_sec": 0, 00:13:17.247 "r_mbytes_per_sec": 0, 00:13:17.247 "w_mbytes_per_sec": 0 00:13:17.247 }, 00:13:17.247 "claimed": false, 00:13:17.247 "zoned": false, 00:13:17.247 "supported_io_types": { 00:13:17.247 "read": true, 00:13:17.247 "write": true, 00:13:17.247 "unmap": false, 00:13:17.247 "write_zeroes": true, 00:13:17.247 "flush": false, 00:13:17.247 "reset": true, 00:13:17.247 "compare": false, 00:13:17.247 "compare_and_write": false, 00:13:17.247 "abort": true, 00:13:17.247 "nvme_admin": false, 00:13:17.247 "nvme_io": false 00:13:17.247 }, 00:13:17.247 "driver_specific": {} 00:13:17.247 } 00:13:17.247 ] 00:13:17.247 05:32:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.247 05:32:21 -- common/autotest_common.sh@895 -- # return 0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:17.247 05:32:21 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:17.247 05:32:21 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.247 05:32:21 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:17.247 05:32:21 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:17.247 05:32:21 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:17.247 05:32:21 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:17.247 05:32:21 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:17.247 05:32:21 -- bdev/blockdev.sh@376 -- # tail -1 00:13:17.247 Running I/O for 60 seconds... 
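From here qos_function_test first measures the unthrottled Malloc_0 rate with iostat.py, derives an IOPS cap from it, applies the cap over RPC, then re-measures and expects the throttled rate to land within roughly +/-10% of the cap (the 18900/23100 bounds that show up below). A condensed sketch of that sequence, assuming rpc_cmd is the suite's wrapper around scripts/rpc.py and that the rounding shown is the intent; this run lands on 21000:

  # baseline: IOPS column of the Malloc_0 row, decimal part stripped for arithmetic
  io_result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  io_result=${io_result%%.*}
  iops_limit=$(( io_result / 4 / 1000 * 1000 ))      # quarter of baseline, floored to 1000s
  rpc_cmd bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0
  # re-measure and require the throttled result to sit inside the window
  lower=$(( iops_limit * 9 / 10 )); upper=$(( iops_limit * 11 / 10 ))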
00:13:22.513 05:32:26 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 85440.56 341762.25 0.00 0.00 346112.00 0.00 0.00 ' 00:13:22.513 05:32:26 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:22.513 05:32:26 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:22.513 05:32:26 -- bdev/blockdev.sh@378 -- # iostat_result=85440.56 00:13:22.513 05:32:26 -- bdev/blockdev.sh@383 -- # echo 85440 00:13:22.513 05:32:26 -- bdev/blockdev.sh@414 -- # io_result=85440 00:13:22.513 05:32:26 -- bdev/blockdev.sh@416 -- # iops_limit=21000 00:13:22.513 05:32:26 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']' 00:13:22.513 05:32:26 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:13:22.513 05:32:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.513 05:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 05:32:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.513 05:32:26 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:13:22.513 05:32:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:22.513 05:32:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:22.513 05:32:26 -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 ************************************ 00:13:22.513 START TEST bdev_qos_iops 00:13:22.513 ************************************ 00:13:22.513 05:32:26 -- common/autotest_common.sh@1104 -- # run_qos_test 21000 IOPS Malloc_0 00:13:22.513 05:32:26 -- bdev/blockdev.sh@387 -- # local qos_limit=21000 00:13:22.513 05:32:26 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:22.513 05:32:26 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:22.513 05:32:26 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:22.513 05:32:26 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:22.513 05:32:26 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:22.513 05:32:26 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:22.513 05:32:26 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:22.513 05:32:26 -- bdev/blockdev.sh@376 -- # tail -1 00:13:27.784 05:32:31 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 21020.64 84082.54 0.00 0.00 85428.00 0.00 0.00 ' 00:13:27.784 05:32:31 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:27.784 05:32:31 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:27.784 05:32:31 -- bdev/blockdev.sh@378 -- # iostat_result=21020.64 00:13:27.784 05:32:31 -- bdev/blockdev.sh@383 -- # echo 21020 00:13:27.784 05:32:31 -- bdev/blockdev.sh@390 -- # qos_result=21020 00:13:27.784 05:32:31 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:27.784 05:32:31 -- bdev/blockdev.sh@394 -- # lower_limit=18900 00:13:27.784 05:32:31 -- bdev/blockdev.sh@395 -- # upper_limit=23100 00:13:27.784 05:32:31 -- bdev/blockdev.sh@398 -- # '[' 21020 -lt 18900 ']' 00:13:27.784 05:32:31 -- bdev/blockdev.sh@398 -- # '[' 21020 -gt 23100 ']' 00:13:27.784 00:13:27.784 real 0m5.189s 00:13:27.784 user 0m0.121s 00:13:27.784 sys 0m0.015s 00:13:27.784 05:32:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.784 05:32:31 -- common/autotest_common.sh@10 -- # set +x 00:13:27.784 ************************************ 00:13:27.784 END TEST bdev_qos_iops 00:13:27.784 ************************************ 00:13:27.784 05:32:31 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:27.784 05:32:31 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:27.784 05:32:31 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:27.784 05:32:31 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:27.784 05:32:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:27.784 05:32:31 -- bdev/blockdev.sh@376 -- # tail -1 00:13:27.784 05:32:31 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:33.055 05:32:36 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 27927.65 111710.59 0.00 0.00 113664.00 0.00 0.00 ' 00:13:33.055 05:32:36 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:33.055 05:32:36 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:33.055 05:32:36 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:33.055 05:32:36 -- bdev/blockdev.sh@380 -- # iostat_result=113664.00 00:13:33.055 05:32:36 -- bdev/blockdev.sh@383 -- # echo 113664 00:13:33.055 05:32:36 -- bdev/blockdev.sh@425 -- # bw_limit=113664 00:13:33.055 05:32:36 -- bdev/blockdev.sh@426 -- # bw_limit=11 00:13:33.055 05:32:36 -- bdev/blockdev.sh@427 -- # '[' 11 -lt 2 ']' 00:13:33.055 05:32:36 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1 00:13:33.055 05:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.055 05:32:36 -- common/autotest_common.sh@10 -- # set +x 00:13:33.055 05:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.055 05:32:36 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1 00:13:33.055 05:32:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:33.055 05:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:33.055 05:32:36 -- common/autotest_common.sh@10 -- # set +x 00:13:33.055 ************************************ 00:13:33.055 START TEST bdev_qos_bw 00:13:33.055 ************************************ 00:13:33.055 05:32:36 -- common/autotest_common.sh@1104 -- # run_qos_test 11 BANDWIDTH Null_1 00:13:33.055 05:32:36 -- bdev/blockdev.sh@387 -- # local qos_limit=11 00:13:33.055 05:32:36 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:33.055 05:32:36 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:33.055 05:32:36 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:33.055 05:32:36 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:33.055 05:32:36 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:33.055 05:32:36 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:33.055 05:32:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:33.055 05:32:36 -- bdev/blockdev.sh@376 -- # tail -1 00:13:38.328 05:32:41 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2814.21 11256.85 0.00 0.00 11420.00 0.00 0.00 ' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@380 -- # iostat_result=11420.00 00:13:38.328 05:32:41 -- bdev/blockdev.sh@383 -- # echo 11420 00:13:38.328 05:32:41 -- bdev/blockdev.sh@390 -- # qos_result=11420 00:13:38.328 05:32:41 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@392 -- # qos_limit=11264 00:13:38.328 05:32:41 -- bdev/blockdev.sh@394 -- # lower_limit=10137 00:13:38.328 05:32:41 -- bdev/blockdev.sh@395 -- # upper_limit=12390 00:13:38.328 05:32:41 -- bdev/blockdev.sh@398 -- # '[' 11420 -lt 10137 ']' 00:13:38.328 05:32:41 -- bdev/blockdev.sh@398 -- # '[' 
11420 -gt 12390 ']' 00:13:38.328 00:13:38.328 real 0m5.231s 00:13:38.328 user 0m0.118s 00:13:38.328 sys 0m0.025s 00:13:38.328 05:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.328 ************************************ 00:13:38.328 END TEST bdev_qos_bw 00:13:38.328 ************************************ 00:13:38.328 05:32:41 -- common/autotest_common.sh@10 -- # set +x 00:13:38.328 05:32:41 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:38.328 05:32:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.328 05:32:41 -- common/autotest_common.sh@10 -- # set +x 00:13:38.328 05:32:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.328 05:32:42 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:38.328 05:32:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:38.328 05:32:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.328 05:32:42 -- common/autotest_common.sh@10 -- # set +x 00:13:38.328 ************************************ 00:13:38.328 START TEST bdev_qos_ro_bw 00:13:38.328 ************************************ 00:13:38.328 05:32:42 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:38.328 05:32:42 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:38.328 05:32:42 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:38.328 05:32:42 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:38.328 05:32:42 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:38.328 05:32:42 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:38.328 05:32:42 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:38.328 05:32:42 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:38.328 05:32:42 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:38.328 05:32:42 -- bdev/blockdev.sh@376 -- # tail -1 00:13:43.603 05:32:47 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.96 2047.83 0.00 0.00 2060.00 0.00 0.00 ' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:13:43.603 05:32:47 -- bdev/blockdev.sh@383 -- # echo 2060 00:13:43.603 05:32:47 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:13:43.603 05:32:47 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:43.603 05:32:47 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:43.603 05:32:47 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:43.603 05:32:47 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:13:43.603 05:32:47 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:13:43.603 00:13:43.603 real 0m5.165s 00:13:43.603 user 0m0.115s 00:13:43.603 sys 0m0.024s 00:13:43.603 05:32:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.603 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.603 ************************************ 00:13:43.603 END TEST bdev_qos_ro_bw 00:13:43.603 ************************************ 00:13:43.603 05:32:47 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:43.603 05:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.603 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.862 05:32:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.862 05:32:47 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:43.862 05:32:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.862 05:32:47 -- common/autotest_common.sh@10 -- # set +x 00:13:44.121 00:13:44.121 Latency(us) 00:13:44.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.121 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:44.121 Malloc_0 : 26.60 28226.45 110.26 0.00 0.00 8985.79 1720.32 503316.48 00:13:44.121 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:44.121 Null_1 : 26.79 28044.31 109.55 0.00 0.00 9111.15 644.19 185883.93 00:13:44.121 =================================================================================================================== 00:13:44.121 Total : 56270.76 219.81 0.00 0.00 9048.49 644.19 503316.48 00:13:44.121 0 00:13:44.121 05:32:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.121 05:32:47 -- bdev/blockdev.sh@459 -- # killprocess 127822 00:13:44.121 05:32:47 -- common/autotest_common.sh@926 -- # '[' -z 127822 ']' 00:13:44.121 05:32:47 -- common/autotest_common.sh@930 -- # kill -0 127822 00:13:44.121 05:32:47 -- common/autotest_common.sh@931 -- # uname 00:13:44.121 05:32:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.121 05:32:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127822 00:13:44.121 05:32:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:44.121 killing process with pid 127822 00:13:44.121 05:32:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:44.121 05:32:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127822' 00:13:44.121 05:32:47 -- common/autotest_common.sh@945 -- # kill 127822 00:13:44.121 Received shutdown signal, test time was about 26.823570 seconds 00:13:44.121 00:13:44.121 Latency(us) 00:13:44.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.121 =================================================================================================================== 00:13:44.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.121 05:32:47 -- common/autotest_common.sh@950 -- # wait 127822 00:13:45.531 05:32:49 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:45.531 00:13:45.531 real 0m29.229s 00:13:45.531 user 0m29.911s 00:13:45.531 sys 0m0.672s 00:13:45.531 05:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.531 ************************************ 00:13:45.531 END TEST bdev_qos 00:13:45.531 ************************************ 00:13:45.531 05:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:45.531 05:32:49 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:45.531 05:32:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.531 05:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.531 05:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:45.531 ************************************ 00:13:45.531 START TEST bdev_qd_sampling 00:13:45.531 ************************************ 00:13:45.531 05:32:49 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:45.531 05:32:49 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:45.531 Process bdev QD sampling period testing pid: 132998 00:13:45.531 05:32:49 -- bdev/blockdev.sh@539 -- # QD_PID=132998 00:13:45.531 05:32:49 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 132998' 00:13:45.531 05:32:49 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:45.531 05:32:49 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:45.531 05:32:49 -- bdev/blockdev.sh@542 -- # waitforlisten 132998 00:13:45.531 05:32:49 -- common/autotest_common.sh@819 -- # '[' -z 132998 ']' 00:13:45.531 05:32:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.531 05:32:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.531 05:32:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.531 05:32:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.531 05:32:49 -- common/autotest_common.sh@10 -- # set +x 00:13:45.531 [2024-10-07 05:32:49.200381] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:45.531 [2024-10-07 05:32:49.200595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132998 ] 00:13:45.531 [2024-10-07 05:32:49.385184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:45.790 [2024-10-07 05:32:49.675942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.790 [2024-10-07 05:32:49.675951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.356 05:32:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.356 05:32:50 -- common/autotest_common.sh@852 -- # return 0 00:13:46.356 05:32:50 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:46.357 05:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.357 05:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.615 Malloc_QD 00:13:46.615 05:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.615 05:32:50 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:46.615 05:32:50 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:46.615 05:32:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:46.615 05:32:50 -- common/autotest_common.sh@889 -- # local i 00:13:46.615 05:32:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:46.615 05:32:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:46.615 05:32:50 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:46.615 05:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.615 05:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.615 05:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.615 05:32:50 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:46.615 05:32:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.615 05:32:50 -- common/autotest_common.sh@10 -- # set +x 00:13:46.615 [ 00:13:46.615 { 00:13:46.615 "name": "Malloc_QD", 00:13:46.615 "aliases": [ 00:13:46.615 "903aaaf3-2699-4fd4-bdaf-b21b107d44ed" 00:13:46.615 ], 00:13:46.615 "product_name": "Malloc disk", 00:13:46.615 "block_size": 512, 00:13:46.615 "num_blocks": 262144, 
00:13:46.615 "uuid": "903aaaf3-2699-4fd4-bdaf-b21b107d44ed", 00:13:46.615 "assigned_rate_limits": { 00:13:46.615 "rw_ios_per_sec": 0, 00:13:46.615 "rw_mbytes_per_sec": 0, 00:13:46.615 "r_mbytes_per_sec": 0, 00:13:46.615 "w_mbytes_per_sec": 0 00:13:46.615 }, 00:13:46.615 "claimed": false, 00:13:46.615 "zoned": false, 00:13:46.615 "supported_io_types": { 00:13:46.615 "read": true, 00:13:46.615 "write": true, 00:13:46.615 "unmap": true, 00:13:46.615 "write_zeroes": true, 00:13:46.615 "flush": true, 00:13:46.615 "reset": true, 00:13:46.615 "compare": false, 00:13:46.615 "compare_and_write": false, 00:13:46.615 "abort": true, 00:13:46.615 "nvme_admin": false, 00:13:46.615 "nvme_io": false 00:13:46.615 }, 00:13:46.615 "memory_domains": [ 00:13:46.615 { 00:13:46.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.615 "dma_device_type": 2 00:13:46.615 } 00:13:46.615 ], 00:13:46.615 "driver_specific": {} 00:13:46.615 } 00:13:46.615 ] 00:13:46.615 05:32:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.615 05:32:50 -- common/autotest_common.sh@895 -- # return 0 00:13:46.615 05:32:50 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:46.615 05:32:50 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.615 Running I/O for 5 seconds... 00:13:48.512 05:32:52 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:48.512 05:32:52 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:48.512 05:32:52 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:48.512 05:32:52 -- bdev/blockdev.sh@519 -- # local iostats 00:13:48.512 05:32:52 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:48.512 05:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.512 05:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:48.512 05:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.512 05:32:52 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:48.512 05:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.512 05:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:48.512 05:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.512 05:32:52 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:48.512 "tick_rate": 2200000000, 00:13:48.512 "ticks": 1640861574476, 00:13:48.512 "bdevs": [ 00:13:48.512 { 00:13:48.512 "name": "Malloc_QD", 00:13:48.512 "bytes_read": 516985344, 00:13:48.512 "num_read_ops": 126211, 00:13:48.512 "bytes_written": 0, 00:13:48.512 "num_write_ops": 0, 00:13:48.512 "bytes_unmapped": 0, 00:13:48.512 "num_unmap_ops": 0, 00:13:48.512 "bytes_copied": 0, 00:13:48.512 "num_copy_ops": 0, 00:13:48.512 "read_latency_ticks": 2149329947112, 00:13:48.512 "max_read_latency_ticks": 21959688, 00:13:48.512 "min_read_latency_ticks": 380598, 00:13:48.512 "write_latency_ticks": 0, 00:13:48.512 "max_write_latency_ticks": 0, 00:13:48.512 "min_write_latency_ticks": 0, 00:13:48.512 "unmap_latency_ticks": 0, 00:13:48.512 "max_unmap_latency_ticks": 0, 00:13:48.512 "min_unmap_latency_ticks": 0, 00:13:48.512 "copy_latency_ticks": 0, 00:13:48.512 "max_copy_latency_ticks": 0, 00:13:48.512 "min_copy_latency_ticks": 0, 00:13:48.512 "io_error": {}, 00:13:48.512 "queue_depth_polling_period": 10, 00:13:48.512 "queue_depth": 512, 00:13:48.512 "io_time": 20, 00:13:48.512 "weighted_io_time": 10240 00:13:48.512 } 00:13:48.512 ] 00:13:48.512 }' 00:13:48.512 05:32:52 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:13:48.512 05:32:52 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:48.512 05:32:52 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:48.512 05:32:52 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:48.512 05:32:52 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:48.512 05:32:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.512 05:32:52 -- common/autotest_common.sh@10 -- # set +x 00:13:48.512 00:13:48.512 Latency(us) 00:13:48.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.512 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:48.512 Malloc_QD : 2.00 32603.91 127.36 0.00 0.00 7825.35 2323.55 10009.13 00:13:48.512 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:48.512 Malloc_QD : 2.00 33366.89 130.34 0.00 0.00 7647.70 1839.48 9115.46 00:13:48.512 =================================================================================================================== 00:13:48.512 Total : 65970.80 257.70 0.00 0.00 7735.49 1839.48 10009.13 00:13:48.770 0 00:13:48.770 05:32:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.770 05:32:52 -- bdev/blockdev.sh@552 -- # killprocess 132998 00:13:48.770 05:32:52 -- common/autotest_common.sh@926 -- # '[' -z 132998 ']' 00:13:48.770 05:32:52 -- common/autotest_common.sh@930 -- # kill -0 132998 00:13:48.770 05:32:52 -- common/autotest_common.sh@931 -- # uname 00:13:48.770 05:32:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.770 05:32:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132998 00:13:48.770 05:32:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:48.770 05:32:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:48.770 killing process with pid 132998 00:13:48.770 05:32:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132998' 00:13:48.770 05:32:52 -- common/autotest_common.sh@945 -- # kill 132998 00:13:48.770 Received shutdown signal, test time was about 2.141660 seconds 00:13:48.770 00:13:48.770 Latency(us) 00:13:48.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.770 =================================================================================================================== 00:13:48.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.770 05:32:52 -- common/autotest_common.sh@950 -- # wait 132998 00:13:50.146 05:32:53 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:50.146 00:13:50.146 real 0m4.713s 00:13:50.146 user 0m8.722s 00:13:50.146 sys 0m0.424s 00:13:50.146 05:32:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.146 05:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:50.146 ************************************ 00:13:50.146 END TEST bdev_qd_sampling 00:13:50.146 ************************************ 00:13:50.146 05:32:53 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:50.146 05:32:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:50.146 05:32:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.146 05:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:50.146 ************************************ 00:13:50.146 START TEST bdev_error 00:13:50.146 ************************************ 00:13:50.146 05:32:53 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:50.146 05:32:53 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:50.146 
05:32:53 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:50.146 05:32:53 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:50.146 05:32:53 -- bdev/blockdev.sh@470 -- # ERR_PID=133679 00:13:50.146 Process error testing pid: 133679 00:13:50.146 05:32:53 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:50.146 05:32:53 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 133679' 00:13:50.146 05:32:53 -- bdev/blockdev.sh@472 -- # waitforlisten 133679 00:13:50.146 05:32:53 -- common/autotest_common.sh@819 -- # '[' -z 133679 ']' 00:13:50.146 05:32:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.146 05:32:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.146 05:32:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.146 05:32:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:50.146 05:32:53 -- common/autotest_common.sh@10 -- # set +x 00:13:50.146 [2024-10-07 05:32:53.954690] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:50.146 [2024-10-07 05:32:53.954850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133679 ] 00:13:50.146 [2024-10-07 05:32:54.108293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.404 [2024-10-07 05:32:54.286408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.970 05:32:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:50.970 05:32:54 -- common/autotest_common.sh@852 -- # return 0 00:13:50.970 05:32:54 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:50.970 05:32:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.970 05:32:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 Dev_1 00:13:51.229 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.229 05:32:55 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:51.229 05:32:55 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:51.229 05:32:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:51.229 05:32:55 -- common/autotest_common.sh@889 -- # local i 00:13:51.229 05:32:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:51.229 05:32:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:51.229 05:32:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:51.229 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.229 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.229 05:32:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:51.229 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.229 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 [ 00:13:51.229 { 00:13:51.229 "name": "Dev_1", 00:13:51.229 "aliases": [ 00:13:51.229 "c76e91b2-aab2-4e58-a5bf-dcba356e670c" 00:13:51.229 ], 00:13:51.229 "product_name": "Malloc disk", 00:13:51.229 "block_size": 512, 00:13:51.229 "num_blocks": 262144, 
00:13:51.229 "uuid": "c76e91b2-aab2-4e58-a5bf-dcba356e670c", 00:13:51.229 "assigned_rate_limits": { 00:13:51.229 "rw_ios_per_sec": 0, 00:13:51.229 "rw_mbytes_per_sec": 0, 00:13:51.229 "r_mbytes_per_sec": 0, 00:13:51.229 "w_mbytes_per_sec": 0 00:13:51.229 }, 00:13:51.229 "claimed": false, 00:13:51.229 "zoned": false, 00:13:51.229 "supported_io_types": { 00:13:51.229 "read": true, 00:13:51.229 "write": true, 00:13:51.229 "unmap": true, 00:13:51.229 "write_zeroes": true, 00:13:51.229 "flush": true, 00:13:51.229 "reset": true, 00:13:51.229 "compare": false, 00:13:51.229 "compare_and_write": false, 00:13:51.229 "abort": true, 00:13:51.229 "nvme_admin": false, 00:13:51.229 "nvme_io": false 00:13:51.229 }, 00:13:51.229 "memory_domains": [ 00:13:51.229 { 00:13:51.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.229 "dma_device_type": 2 00:13:51.229 } 00:13:51.229 ], 00:13:51.229 "driver_specific": {} 00:13:51.229 } 00:13:51.229 ] 00:13:51.229 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.229 05:32:55 -- common/autotest_common.sh@895 -- # return 0 00:13:51.229 05:32:55 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:51.229 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.229 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 true 00:13:51.229 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.229 05:32:55 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:51.229 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.229 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.229 Dev_2 00:13:51.229 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.229 05:32:55 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:51.229 05:32:55 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:51.229 05:32:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:51.229 05:32:55 -- common/autotest_common.sh@889 -- # local i 00:13:51.229 05:32:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:51.229 05:32:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:51.229 05:32:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:51.229 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.230 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.230 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.230 05:32:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:51.230 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.230 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.230 [ 00:13:51.230 { 00:13:51.230 "name": "Dev_2", 00:13:51.230 "aliases": [ 00:13:51.230 "a7caf7d6-f1ba-46d3-860c-433d38bd9df0" 00:13:51.230 ], 00:13:51.230 "product_name": "Malloc disk", 00:13:51.230 "block_size": 512, 00:13:51.230 "num_blocks": 262144, 00:13:51.230 "uuid": "a7caf7d6-f1ba-46d3-860c-433d38bd9df0", 00:13:51.230 "assigned_rate_limits": { 00:13:51.230 "rw_ios_per_sec": 0, 00:13:51.230 "rw_mbytes_per_sec": 0, 00:13:51.230 "r_mbytes_per_sec": 0, 00:13:51.230 "w_mbytes_per_sec": 0 00:13:51.230 }, 00:13:51.230 "claimed": false, 00:13:51.230 "zoned": false, 00:13:51.230 "supported_io_types": { 00:13:51.230 "read": true, 00:13:51.230 "write": true, 00:13:51.230 "unmap": true, 00:13:51.230 "write_zeroes": true, 00:13:51.230 "flush": true, 00:13:51.230 "reset": true, 00:13:51.230 "compare": false, 
00:13:51.230 "compare_and_write": false, 00:13:51.230 "abort": true, 00:13:51.230 "nvme_admin": false, 00:13:51.230 "nvme_io": false 00:13:51.230 }, 00:13:51.230 "memory_domains": [ 00:13:51.230 { 00:13:51.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.230 "dma_device_type": 2 00:13:51.230 } 00:13:51.230 ], 00:13:51.230 "driver_specific": {} 00:13:51.230 } 00:13:51.230 ] 00:13:51.230 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.230 05:32:55 -- common/autotest_common.sh@895 -- # return 0 00:13:51.230 05:32:55 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:51.230 05:32:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.230 05:32:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.488 05:32:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.488 05:32:55 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:51.488 05:32:55 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:51.488 Running I/O for 5 seconds... 00:13:52.422 05:32:56 -- bdev/blockdev.sh@485 -- # kill -0 133679 00:13:52.422 Process is existed as continue on error is set. Pid: 133679 00:13:52.422 05:32:56 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 133679' 00:13:52.422 05:32:56 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:52.422 05:32:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.423 05:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 05:32:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.423 05:32:56 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:52.423 05:32:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.423 05:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:52.423 Timeout while waiting for response: 00:13:52.423 00:13:52.423 00:13:52.681 05:32:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.681 05:32:56 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:56.875 00:13:56.875 Latency(us) 00:13:56.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.875 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:56.875 EE_Dev_1 : 0.91 41997.48 164.05 5.52 0.00 378.14 140.57 1653.29 00:13:56.875 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:56.875 Dev_2 : 5.00 90046.36 351.74 0.00 0.00 175.13 54.46 287881.77 00:13:56.875 =================================================================================================================== 00:13:56.875 Total : 132043.85 515.80 5.52 0.00 190.95 54.46 287881.77 00:13:57.811 05:33:01 -- bdev/blockdev.sh@497 -- # killprocess 133679 00:13:57.811 05:33:01 -- common/autotest_common.sh@926 -- # '[' -z 133679 ']' 00:13:57.811 05:33:01 -- common/autotest_common.sh@930 -- # kill -0 133679 00:13:57.811 05:33:01 -- common/autotest_common.sh@931 -- # uname 00:13:57.811 05:33:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:57.811 05:33:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133679 00:13:57.811 05:33:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:57.811 05:33:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:57.811 killing process with pid 133679 00:13:57.811 05:33:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133679' 00:13:57.811 Received shutdown signal, test time was about 
5.000000 seconds 00:13:57.811 00:13:57.811 Latency(us) 00:13:57.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.811 =================================================================================================================== 00:13:57.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.811 05:33:01 -- common/autotest_common.sh@945 -- # kill 133679 00:13:57.811 05:33:01 -- common/autotest_common.sh@950 -- # wait 133679 00:13:59.188 05:33:02 -- bdev/blockdev.sh@501 -- # ERR_PID=134054 00:13:59.188 Process error testing pid: 134054 00:13:59.188 05:33:02 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:59.188 05:33:02 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 134054' 00:13:59.188 05:33:02 -- bdev/blockdev.sh@503 -- # waitforlisten 134054 00:13:59.188 05:33:02 -- common/autotest_common.sh@819 -- # '[' -z 134054 ']' 00:13:59.188 05:33:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.188 05:33:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:59.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.188 05:33:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.188 05:33:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:59.188 05:33:02 -- common/autotest_common.sh@10 -- # set +x 00:13:59.188 [2024-10-07 05:33:02.936045] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:59.188 [2024-10-07 05:33:02.936242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134054 ] 00:13:59.188 [2024-10-07 05:33:03.102804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.447 [2024-10-07 05:33:03.311859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.015 05:33:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:00.015 05:33:03 -- common/autotest_common.sh@852 -- # return 0 00:14:00.015 05:33:03 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:00.015 05:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.015 05:33:03 -- common/autotest_common.sh@10 -- # set +x 00:14:00.015 Dev_1 00:14:00.015 05:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.015 05:33:03 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:00.015 05:33:03 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:00.015 05:33:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:00.015 05:33:03 -- common/autotest_common.sh@889 -- # local i 00:14:00.015 05:33:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:00.015 05:33:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:00.015 05:33:03 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:00.015 05:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.015 05:33:03 -- common/autotest_common.sh@10 -- # set +x 00:14:00.015 05:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.015 05:33:03 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:00.015 05:33:03 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:14:00.015 05:33:03 -- common/autotest_common.sh@10 -- # set +x 00:14:00.015 [ 00:14:00.015 { 00:14:00.015 "name": "Dev_1", 00:14:00.015 "aliases": [ 00:14:00.015 "b6667ae7-5f83-4a50-ba9f-89e975cabf7d" 00:14:00.015 ], 00:14:00.015 "product_name": "Malloc disk", 00:14:00.015 "block_size": 512, 00:14:00.015 "num_blocks": 262144, 00:14:00.015 "uuid": "b6667ae7-5f83-4a50-ba9f-89e975cabf7d", 00:14:00.015 "assigned_rate_limits": { 00:14:00.015 "rw_ios_per_sec": 0, 00:14:00.015 "rw_mbytes_per_sec": 0, 00:14:00.015 "r_mbytes_per_sec": 0, 00:14:00.015 "w_mbytes_per_sec": 0 00:14:00.015 }, 00:14:00.015 "claimed": false, 00:14:00.015 "zoned": false, 00:14:00.015 "supported_io_types": { 00:14:00.015 "read": true, 00:14:00.015 "write": true, 00:14:00.015 "unmap": true, 00:14:00.015 "write_zeroes": true, 00:14:00.015 "flush": true, 00:14:00.015 "reset": true, 00:14:00.015 "compare": false, 00:14:00.015 "compare_and_write": false, 00:14:00.015 "abort": true, 00:14:00.015 "nvme_admin": false, 00:14:00.015 "nvme_io": false 00:14:00.015 }, 00:14:00.015 "memory_domains": [ 00:14:00.274 { 00:14:00.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.274 "dma_device_type": 2 00:14:00.274 } 00:14:00.274 ], 00:14:00.274 "driver_specific": {} 00:14:00.274 } 00:14:00.274 ] 00:14:00.274 05:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:03 -- common/autotest_common.sh@895 -- # return 0 00:14:00.274 05:33:03 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:00.274 05:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.274 05:33:03 -- common/autotest_common.sh@10 -- # set +x 00:14:00.274 true 00:14:00.274 05:33:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:03 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:00.274 05:33:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.274 05:33:03 -- common/autotest_common.sh@10 -- # set +x 00:14:00.274 Dev_2 00:14:00.274 05:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:04 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:00.274 05:33:04 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:00.274 05:33:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:00.274 05:33:04 -- common/autotest_common.sh@889 -- # local i 00:14:00.274 05:33:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:00.274 05:33:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:00.274 05:33:04 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:00.274 05:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.274 05:33:04 -- common/autotest_common.sh@10 -- # set +x 00:14:00.274 05:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:04 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:00.274 05:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.274 05:33:04 -- common/autotest_common.sh@10 -- # set +x 00:14:00.274 [ 00:14:00.274 { 00:14:00.274 "name": "Dev_2", 00:14:00.274 "aliases": [ 00:14:00.274 "5992fdb7-89f0-4b52-9925-906080961d9d" 00:14:00.274 ], 00:14:00.274 "product_name": "Malloc disk", 00:14:00.274 "block_size": 512, 00:14:00.274 "num_blocks": 262144, 00:14:00.274 "uuid": "5992fdb7-89f0-4b52-9925-906080961d9d", 00:14:00.274 "assigned_rate_limits": { 00:14:00.274 "rw_ios_per_sec": 0, 00:14:00.274 "rw_mbytes_per_sec": 0, 00:14:00.274 "r_mbytes_per_sec": 0, 00:14:00.274 
"w_mbytes_per_sec": 0 00:14:00.274 }, 00:14:00.274 "claimed": false, 00:14:00.274 "zoned": false, 00:14:00.274 "supported_io_types": { 00:14:00.274 "read": true, 00:14:00.274 "write": true, 00:14:00.274 "unmap": true, 00:14:00.274 "write_zeroes": true, 00:14:00.274 "flush": true, 00:14:00.274 "reset": true, 00:14:00.274 "compare": false, 00:14:00.274 "compare_and_write": false, 00:14:00.274 "abort": true, 00:14:00.274 "nvme_admin": false, 00:14:00.274 "nvme_io": false 00:14:00.274 }, 00:14:00.274 "memory_domains": [ 00:14:00.274 { 00:14:00.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.274 "dma_device_type": 2 00:14:00.274 } 00:14:00.274 ], 00:14:00.274 "driver_specific": {} 00:14:00.274 } 00:14:00.274 ] 00:14:00.274 05:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:04 -- common/autotest_common.sh@895 -- # return 0 00:14:00.274 05:33:04 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:00.274 05:33:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:00.274 05:33:04 -- common/autotest_common.sh@10 -- # set +x 00:14:00.274 05:33:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:00.274 05:33:04 -- bdev/blockdev.sh@513 -- # NOT wait 134054 00:14:00.274 05:33:04 -- common/autotest_common.sh@640 -- # local es=0 00:14:00.274 05:33:04 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 134054 00:14:00.274 05:33:04 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:00.274 05:33:04 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:00.274 05:33:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.274 05:33:04 -- common/autotest_common.sh@632 -- # type -t wait 00:14:00.274 05:33:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:00.274 05:33:04 -- common/autotest_common.sh@643 -- # wait 134054 00:14:00.533 Running I/O for 5 seconds... 
00:14:00.533 task offset: 147984 on job bdev=EE_Dev_1 fails 00:14:00.533 00:14:00.533 Latency(us) 00:14:00.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.533 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:00.533 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:00.533 EE_Dev_1 : 0.00 26602.18 103.91 6045.95 0.00 386.38 167.56 677.70 00:14:00.533 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:00.533 Dev_2 : 0.00 20901.37 81.65 0.00 0.00 507.64 148.95 904.84 00:14:00.533 =================================================================================================================== 00:14:00.533 Total : 47503.55 185.56 6045.95 0.00 452.15 148.95 904.84 00:14:00.533 [2024-10-07 05:33:04.275811] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:00.533 request: 00:14:00.533 { 00:14:00.533 "method": "perform_tests", 00:14:00.533 "req_id": 1 00:14:00.533 } 00:14:00.533 Got JSON-RPC error response 00:14:00.533 response: 00:14:00.533 { 00:14:00.533 "code": -32603, 00:14:00.533 "message": "bdevperf failed with error Operation not permitted" 00:14:00.533 } 00:14:02.438 05:33:05 -- common/autotest_common.sh@643 -- # es=255 00:14:02.438 05:33:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:02.438 05:33:05 -- common/autotest_common.sh@652 -- # es=127 00:14:02.438 05:33:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:02.438 05:33:05 -- common/autotest_common.sh@660 -- # es=1 00:14:02.438 05:33:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:02.438 00:14:02.438 real 0m12.007s 00:14:02.438 user 0m12.052s 00:14:02.438 sys 0m0.946s 00:14:02.438 05:33:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.438 ************************************ 00:14:02.438 05:33:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.438 END TEST bdev_error 00:14:02.438 ************************************ 00:14:02.438 05:33:05 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:02.438 05:33:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:02.438 05:33:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:02.438 05:33:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.438 ************************************ 00:14:02.438 START TEST bdev_stat 00:14:02.438 ************************************ 00:14:02.438 05:33:05 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:02.438 05:33:05 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:02.438 05:33:05 -- bdev/blockdev.sh@594 -- # STAT_PID=134193 00:14:02.438 Process Bdev IO statistics testing pid: 134193 00:14:02.438 05:33:05 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 134193' 00:14:02.438 05:33:05 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:02.438 05:33:05 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:02.438 05:33:05 -- bdev/blockdev.sh@597 -- # waitforlisten 134193 00:14:02.438 05:33:05 -- common/autotest_common.sh@819 -- # '[' -z 134193 ']' 00:14:02.438 05:33:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.438 05:33:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:02.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
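Every suite in this section follows the launch pattern the stat test is going through here: start bdevperf with -z so it initializes and then idles, wait for its RPC socket, create the bdevs under test over RPC, and only then trigger the I/O with the perform_tests helper. Roughly, with waitforlisten and rpc_cmd being the autotest_common.sh helpers and "$conf" standing in for the config argument:

  build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C "$conf" &
  STAT_PID=$!
  waitforlisten $STAT_PID                   # blocks until /var/tmp/spdk.sock answers
  rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
  examples/bdev/bdevperf/bdevperf.py perform_tests   # kicks off the 10 s randread run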
00:14:02.438 05:33:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.438 05:33:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:02.438 05:33:05 -- common/autotest_common.sh@10 -- # set +x 00:14:02.438 [2024-10-07 05:33:06.034054] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:02.438 [2024-10-07 05:33:06.034237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134193 ] 00:14:02.438 [2024-10-07 05:33:06.210089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.697 [2024-10-07 05:33:06.497022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.697 [2024-10-07 05:33:06.497045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.263 05:33:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:03.263 05:33:06 -- common/autotest_common.sh@852 -- # return 0 00:14:03.263 05:33:06 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:03.264 05:33:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.264 05:33:06 -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 Malloc_STAT 00:14:03.264 05:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.264 05:33:07 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:03.264 05:33:07 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:03.264 05:33:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:03.264 05:33:07 -- common/autotest_common.sh@889 -- # local i 00:14:03.264 05:33:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:03.264 05:33:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:03.264 05:33:07 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:03.264 05:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.264 05:33:07 -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 05:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.264 05:33:07 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:03.264 05:33:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.264 05:33:07 -- common/autotest_common.sh@10 -- # set +x 00:14:03.264 [ 00:14:03.264 { 00:14:03.264 "name": "Malloc_STAT", 00:14:03.264 "aliases": [ 00:14:03.264 "58cdaf6e-d6e6-4163-90bd-3a56238051ee" 00:14:03.264 ], 00:14:03.264 "product_name": "Malloc disk", 00:14:03.264 "block_size": 512, 00:14:03.264 "num_blocks": 262144, 00:14:03.264 "uuid": "58cdaf6e-d6e6-4163-90bd-3a56238051ee", 00:14:03.264 "assigned_rate_limits": { 00:14:03.264 "rw_ios_per_sec": 0, 00:14:03.264 "rw_mbytes_per_sec": 0, 00:14:03.264 "r_mbytes_per_sec": 0, 00:14:03.264 "w_mbytes_per_sec": 0 00:14:03.264 }, 00:14:03.264 "claimed": false, 00:14:03.264 "zoned": false, 00:14:03.264 "supported_io_types": { 00:14:03.264 "read": true, 00:14:03.264 "write": true, 00:14:03.264 "unmap": true, 00:14:03.264 "write_zeroes": true, 00:14:03.264 "flush": true, 00:14:03.264 "reset": true, 00:14:03.264 "compare": false, 00:14:03.264 "compare_and_write": false, 00:14:03.264 "abort": true, 00:14:03.264 "nvme_admin": false, 00:14:03.264 "nvme_io": false 00:14:03.264 }, 00:14:03.264 "memory_domains": [ 00:14:03.264 { 
00:14:03.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.264 "dma_device_type": 2 00:14:03.264 } 00:14:03.264 ], 00:14:03.264 "driver_specific": {} 00:14:03.264 } 00:14:03.264 ] 00:14:03.264 05:33:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.264 05:33:07 -- common/autotest_common.sh@895 -- # return 0 00:14:03.264 05:33:07 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:03.264 05:33:07 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:03.264 Running I/O for 10 seconds... 00:14:05.196 05:33:09 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:05.196 05:33:09 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:05.196 05:33:09 -- bdev/blockdev.sh@558 -- # local iostats 00:14:05.196 05:33:09 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:05.196 05:33:09 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:05.196 05:33:09 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:05.196 05:33:09 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:05.196 05:33:09 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:05.196 05:33:09 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:05.196 05:33:09 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:05.196 05:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.196 05:33:09 -- common/autotest_common.sh@10 -- # set +x 00:14:05.196 05:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.196 05:33:09 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:05.196 "tick_rate": 2200000000, 00:14:05.196 "ticks": 1677677792912, 00:14:05.196 "bdevs": [ 00:14:05.196 { 00:14:05.196 "name": "Malloc_STAT", 00:14:05.196 "bytes_read": 515936768, 00:14:05.196 "num_read_ops": 125955, 00:14:05.196 "bytes_written": 0, 00:14:05.197 "num_write_ops": 0, 00:14:05.197 "bytes_unmapped": 0, 00:14:05.197 "num_unmap_ops": 0, 00:14:05.197 "bytes_copied": 0, 00:14:05.197 "num_copy_ops": 0, 00:14:05.197 "read_latency_ticks": 2137004619980, 00:14:05.197 "max_read_latency_ticks": 21272224, 00:14:05.197 "min_read_latency_ticks": 323846, 00:14:05.197 "write_latency_ticks": 0, 00:14:05.197 "max_write_latency_ticks": 0, 00:14:05.197 "min_write_latency_ticks": 0, 00:14:05.197 "unmap_latency_ticks": 0, 00:14:05.197 "max_unmap_latency_ticks": 0, 00:14:05.197 "min_unmap_latency_ticks": 0, 00:14:05.197 "copy_latency_ticks": 0, 00:14:05.197 "max_copy_latency_ticks": 0, 00:14:05.197 "min_copy_latency_ticks": 0, 00:14:05.197 "io_error": {} 00:14:05.197 } 00:14:05.197 ] 00:14:05.197 }' 00:14:05.197 05:33:09 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:05.454 05:33:09 -- bdev/blockdev.sh@567 -- # io_count1=125955 00:14:05.454 05:33:09 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:05.454 05:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.454 05:33:09 -- common/autotest_common.sh@10 -- # set +x 00:14:05.454 05:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.454 05:33:09 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:05.454 "tick_rate": 2200000000, 00:14:05.454 "ticks": 1677821718382, 00:14:05.454 "name": "Malloc_STAT", 00:14:05.454 "channels": [ 00:14:05.454 { 00:14:05.454 "thread_id": 2, 00:14:05.454 "bytes_read": 266338304, 00:14:05.454 "num_read_ops": 65024, 00:14:05.454 "bytes_written": 0, 00:14:05.454 "num_write_ops": 0, 00:14:05.454 "bytes_unmapped": 0, 00:14:05.454 "num_unmap_ops": 0, 00:14:05.454 
"bytes_copied": 0, 00:14:05.454 "num_copy_ops": 0, 00:14:05.454 "read_latency_ticks": 1103202450305, 00:14:05.454 "max_read_latency_ticks": 20183896, 00:14:05.454 "min_read_latency_ticks": 12358180, 00:14:05.454 "write_latency_ticks": 0, 00:14:05.454 "max_write_latency_ticks": 0, 00:14:05.454 "min_write_latency_ticks": 0, 00:14:05.454 "unmap_latency_ticks": 0, 00:14:05.454 "max_unmap_latency_ticks": 0, 00:14:05.454 "min_unmap_latency_ticks": 0, 00:14:05.454 "copy_latency_ticks": 0, 00:14:05.454 "max_copy_latency_ticks": 0, 00:14:05.454 "min_copy_latency_ticks": 0 00:14:05.454 }, 00:14:05.454 { 00:14:05.454 "thread_id": 3, 00:14:05.454 "bytes_read": 266338304, 00:14:05.454 "num_read_ops": 65024, 00:14:05.454 "bytes_written": 0, 00:14:05.454 "num_write_ops": 0, 00:14:05.454 "bytes_unmapped": 0, 00:14:05.454 "num_unmap_ops": 0, 00:14:05.454 "bytes_copied": 0, 00:14:05.454 "num_copy_ops": 0, 00:14:05.454 "read_latency_ticks": 1104927035132, 00:14:05.454 "max_read_latency_ticks": 21272224, 00:14:05.454 "min_read_latency_ticks": 12084973, 00:14:05.454 "write_latency_ticks": 0, 00:14:05.454 "max_write_latency_ticks": 0, 00:14:05.454 "min_write_latency_ticks": 0, 00:14:05.454 "unmap_latency_ticks": 0, 00:14:05.454 "max_unmap_latency_ticks": 0, 00:14:05.454 "min_unmap_latency_ticks": 0, 00:14:05.454 "copy_latency_ticks": 0, 00:14:05.454 "max_copy_latency_ticks": 0, 00:14:05.454 "min_copy_latency_ticks": 0 00:14:05.454 } 00:14:05.454 ] 00:14:05.454 }' 00:14:05.454 05:33:09 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:05.454 05:33:09 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=65024 00:14:05.454 05:33:09 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=65024 00:14:05.454 05:33:09 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:05.454 05:33:09 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=65024 00:14:05.454 05:33:09 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=130048 00:14:05.454 05:33:09 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:05.454 05:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.454 05:33:09 -- common/autotest_common.sh@10 -- # set +x 00:14:05.454 05:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.454 05:33:09 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:05.454 "tick_rate": 2200000000, 00:14:05.454 "ticks": 1678085617656, 00:14:05.454 "bdevs": [ 00:14:05.454 { 00:14:05.454 "name": "Malloc_STAT", 00:14:05.454 "bytes_read": 565219840, 00:14:05.454 "num_read_ops": 137987, 00:14:05.454 "bytes_written": 0, 00:14:05.454 "num_write_ops": 0, 00:14:05.454 "bytes_unmapped": 0, 00:14:05.454 "num_unmap_ops": 0, 00:14:05.454 "bytes_copied": 0, 00:14:05.454 "num_copy_ops": 0, 00:14:05.454 "read_latency_ticks": 2345987628089, 00:14:05.454 "max_read_latency_ticks": 21272224, 00:14:05.454 "min_read_latency_ticks": 323846, 00:14:05.454 "write_latency_ticks": 0, 00:14:05.454 "max_write_latency_ticks": 0, 00:14:05.454 "min_write_latency_ticks": 0, 00:14:05.454 "unmap_latency_ticks": 0, 00:14:05.454 "max_unmap_latency_ticks": 0, 00:14:05.454 "min_unmap_latency_ticks": 0, 00:14:05.454 "copy_latency_ticks": 0, 00:14:05.454 "max_copy_latency_ticks": 0, 00:14:05.454 "min_copy_latency_ticks": 0, 00:14:05.454 "io_error": {} 00:14:05.454 } 00:14:05.454 ] 00:14:05.455 }' 00:14:05.455 05:33:09 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:05.455 05:33:09 -- bdev/blockdev.sh@576 -- # io_count2=137987 00:14:05.455 05:33:09 -- bdev/blockdev.sh@581 -- # '[' 130048 -lt 
125955 ']' 00:14:05.455 05:33:09 -- bdev/blockdev.sh@581 -- # '[' 130048 -gt 137987 ']' 00:14:05.455 05:33:09 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:05.455 05:33:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.455 05:33:09 -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 00:14:05.455 Latency(us) 00:14:05.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.455 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:05.455 Malloc_STAT : 2.17 32815.91 128.19 0.00 0.00 7778.31 2040.55 12809.31 00:14:05.455 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:05.455 Malloc_STAT : 2.17 33274.72 129.98 0.00 0.00 7672.71 644.19 9711.24 00:14:05.455 =================================================================================================================== 00:14:05.455 Total : 66090.63 258.17 0.00 0.00 7725.13 644.19 12809.31 00:14:05.713 0 00:14:05.713 05:33:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.713 05:33:09 -- bdev/blockdev.sh@607 -- # killprocess 134193 00:14:05.713 05:33:09 -- common/autotest_common.sh@926 -- # '[' -z 134193 ']' 00:14:05.713 05:33:09 -- common/autotest_common.sh@930 -- # kill -0 134193 00:14:05.713 05:33:09 -- common/autotest_common.sh@931 -- # uname 00:14:05.713 05:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.713 05:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134193 00:14:05.713 05:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:05.713 05:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:05.713 killing process with pid 134193 00:14:05.713 05:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134193' 00:14:05.713 05:33:09 -- common/autotest_common.sh@945 -- # kill 134193 00:14:05.713 Received shutdown signal, test time was about 2.348093 seconds 00:14:05.713 00:14:05.713 Latency(us) 00:14:05.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.713 =================================================================================================================== 00:14:05.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:05.713 05:33:09 -- common/autotest_common.sh@950 -- # wait 134193 00:14:07.085 05:33:10 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:07.085 00:14:07.085 real 0m4.814s 00:14:07.085 user 0m9.010s 00:14:07.085 sys 0m0.449s 00:14:07.085 ************************************ 00:14:07.085 END TEST bdev_stat 00:14:07.085 ************************************ 00:14:07.085 05:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.085 05:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:07.085 05:33:10 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:07.085 05:33:10 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:07.085 05:33:10 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:07.085 05:33:10 -- bdev/blockdev.sh@809 -- # cleanup 00:14:07.085 05:33:10 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:07.085 05:33:10 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:07.085 05:33:10 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:07.085 05:33:10 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:07.085 05:33:10 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:07.085 05:33:10 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:14:07.085 00:14:07.085 real 2m21.573s 00:14:07.085 user 5m39.168s 00:14:07.085 sys 0m21.741s 00:14:07.085 05:33:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.085 05:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:07.085 ************************************ 00:14:07.085 END TEST blockdev_general 00:14:07.085 ************************************ 00:14:07.085 05:33:10 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:07.085 05:33:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:07.085 05:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.085 05:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:07.085 ************************************ 00:14:07.085 START TEST bdev_raid 00:14:07.085 ************************************ 00:14:07.085 05:33:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:07.085 * Looking for test storage... 00:14:07.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:07.085 05:33:10 -- bdev/nbd_common.sh@6 -- # set -e 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:07.085 05:33:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:07.085 05:33:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.085 05:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:07.085 ************************************ 00:14:07.085 START TEST raid_function_test_raid0 00:14:07.085 ************************************ 00:14:07.085 05:33:10 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@86 -- # raid_pid=134500 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:07.085 Process raid pid: 134500 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 134500' 00:14:07.085 05:33:10 -- bdev/bdev_raid.sh@88 -- # waitforlisten 134500 /var/tmp/spdk-raid.sock 00:14:07.085 05:33:10 -- common/autotest_common.sh@819 -- # '[' -z 134500 ']' 00:14:07.085 05:33:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:07.085 05:33:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:07.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
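The raid tests that follow all share one scaffold: a pair of base bdevs is combined into a raid bdev over RPC, the result is exported to the kernel as an NBD block device, and data plus unmap behaviour is then verified from userspace. A rough outline of that setup, assuming the bdev_svc app is listening on /var/tmp/spdk-raid.sock as in this run (base bdev sizes are illustrative):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# two small malloc backing bdevs with 512-byte blocks
$RPC bdev_malloc_create -b Base_1 64 512
$RPC bdev_malloc_create -b Base_2 64 512
# stripe them into a raid0 bdev (64 KiB strip) and expose it as /dev/nbd0
$RPC bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid
$RPC nbd_start_disk raid /dev/nbd0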
00:14:07.085 05:33:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:07.085 05:33:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:07.085 05:33:10 -- common/autotest_common.sh@10 -- # set +x 00:14:07.343 [2024-10-07 05:33:11.064107] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:07.343 [2024-10-07 05:33:11.064337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.343 [2024-10-07 05:33:11.231674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.601 [2024-10-07 05:33:11.447742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.859 [2024-10-07 05:33:11.646775] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.117 05:33:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:08.117 05:33:12 -- common/autotest_common.sh@852 -- # return 0 00:14:08.117 05:33:12 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:08.117 05:33:12 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:08.117 05:33:12 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:08.117 05:33:12 -- bdev/bdev_raid.sh@70 -- # cat 00:14:08.117 05:33:12 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:08.684 [2024-10-07 05:33:12.387730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:08.685 [2024-10-07 05:33:12.389573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:08.685 [2024-10-07 05:33:12.389651] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:08.685 [2024-10-07 05:33:12.389665] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:08.685 [2024-10-07 05:33:12.389826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:08.685 [2024-10-07 05:33:12.390149] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:08.685 [2024-10-07 05:33:12.390169] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:08.685 [2024-10-07 05:33:12.390306] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.685 Base_1 00:14:08.685 Base_2 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:08.685 05:33:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:08.685 
05:33:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@12 -- # local i 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.685 05:33:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:08.943 [2024-10-07 05:33:12.863816] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:08.943 /dev/nbd0 00:14:08.943 05:33:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.201 05:33:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.202 05:33:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:09.202 05:33:12 -- common/autotest_common.sh@857 -- # local i 00:14:09.202 05:33:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:09.202 05:33:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:09.202 05:33:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:09.202 05:33:12 -- common/autotest_common.sh@861 -- # break 00:14:09.202 05:33:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:09.202 05:33:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:09.202 05:33:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.202 1+0 records in 00:14:09.202 1+0 records out 00:14:09.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252916 s, 16.2 MB/s 00:14:09.202 05:33:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.202 05:33:12 -- common/autotest_common.sh@874 -- # size=4096 00:14:09.202 05:33:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.202 05:33:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:09.202 05:33:12 -- common/autotest_common.sh@877 -- # return 0 00:14:09.202 05:33:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.202 05:33:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.202 05:33:12 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:09.202 05:33:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.202 05:33:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:09.202 { 00:14:09.202 "nbd_device": "/dev/nbd0", 00:14:09.202 "bdev_name": "raid" 00:14:09.202 } 00:14:09.202 ]' 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:09.202 { 00:14:09.202 "nbd_device": "/dev/nbd0", 00:14:09.202 "bdev_name": "raid" 00:14:09.202 } 00:14:09.202 ]' 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@65 -- # count=1 00:14:09.202 05:33:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 
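The verification pass that follows is essentially: fill the exported device with a known random pattern, punch holes of varying offset and length with blkdiscard, mirror each hole as zeroes in the reference file (the malloc base bdevs read back zeroes for unmapped blocks), and re-compare. Condensed, using the first offset/length pair from the log:

# write a 2 MiB reference pattern and copy it onto the raid device
dd if=/dev/urandom of=/raidrandtest bs=512 count=4096
dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidrandtest /dev/nbd0
# zero blocks 0-127 in the reference, discard the same range on the device, compare again
dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc
blkdiscard -o 0 -l 65536 /dev/nbd0
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidrandtest /dev/nbd0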
00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.202 05:33:13 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:09.461 4096+0 records in 00:14:09.461 4096+0 records out 00:14:09.461 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0168117 s, 125 MB/s 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:09.461 4096+0 records in 00:14:09.461 4096+0 records out 00:14:09.461 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.21991 s, 9.5 MB/s 00:14:09.461 05:33:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:09.721 128+0 records in 00:14:09.721 128+0 records out 00:14:09.721 65536 bytes (66 kB, 64 KiB) copied, 0.000369035 s, 178 MB/s 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:09.721 2035+0 records in 00:14:09.721 2035+0 records out 00:14:09.721 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00616359 s, 169 MB/s 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:09.721 05:33:13 -- 
bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:09.721 456+0 records in 00:14:09.721 456+0 records out 00:14:09.721 233472 bytes (233 kB, 228 KiB) copied, 0.0011225 s, 208 MB/s 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:09.721 05:33:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@51 -- # local i 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.721 05:33:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:09.979 05:33:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.979 05:33:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.979 05:33:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.979 05:33:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.980 [2024-10-07 05:33:13.704718] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@41 -- # break 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.980 05:33:13 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:09.980 05:33:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:10.238 05:33:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:10.238 05:33:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:10.238 05:33:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@65 -- # true 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@65 -- # count=0 00:14:10.238 05:33:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:10.238 05:33:14 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:10.238 05:33:14 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:10.238 05:33:14 -- bdev/bdev_raid.sh@111 -- # killprocess 134500 00:14:10.238 05:33:14 -- common/autotest_common.sh@926 -- # '[' -z 134500 ']' 00:14:10.238 05:33:14 -- common/autotest_common.sh@930 -- # kill -0 134500 00:14:10.238 05:33:14 -- common/autotest_common.sh@931 -- # uname 00:14:10.238 05:33:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:10.238 05:33:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134500 00:14:10.238 05:33:14 
-- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:10.238 05:33:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:10.238 05:33:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134500' 00:14:10.238 killing process with pid 134500 00:14:10.238 05:33:14 -- common/autotest_common.sh@945 -- # kill 134500 00:14:10.238 05:33:14 -- common/autotest_common.sh@950 -- # wait 134500 00:14:10.239 [2024-10-07 05:33:14.053800] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.239 [2024-10-07 05:33:14.054048] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.239 [2024-10-07 05:33:14.054200] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.239 [2024-10-07 05:33:14.054288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:10.239 [2024-10-07 05:33:14.198187] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:11.616 00:14:11.616 real 0m4.237s 00:14:11.616 user 0m5.342s 00:14:11.616 sys 0m0.909s 00:14:11.616 05:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.616 05:33:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.616 ************************************ 00:14:11.616 END TEST raid_function_test_raid0 00:14:11.616 ************************************ 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:11.616 05:33:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:11.616 05:33:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.616 05:33:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.616 ************************************ 00:14:11.616 START TEST raid_function_test_concat 00:14:11.616 ************************************ 00:14:11.616 05:33:15 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@86 -- # raid_pid=134749 00:14:11.616 Process raid pid: 134749 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 134749' 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@88 -- # waitforlisten 134749 /var/tmp/spdk-raid.sock 00:14:11.616 05:33:15 -- common/autotest_common.sh@819 -- # '[' -z 134749 ']' 00:14:11.616 05:33:15 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:11.616 05:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:11.616 05:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:11.616 05:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:11.616 05:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.616 05:33:15 -- common/autotest_common.sh@10 -- # set +x 00:14:11.616 [2024-10-07 05:33:15.343873] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:11.616 [2024-10-07 05:33:15.344057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.616 [2024-10-07 05:33:15.508412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.875 [2024-10-07 05:33:15.695256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.134 [2024-10-07 05:33:15.893025] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.393 05:33:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.393 05:33:16 -- common/autotest_common.sh@852 -- # return 0 00:14:12.393 05:33:16 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:12.393 05:33:16 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:12.393 05:33:16 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:12.393 05:33:16 -- bdev/bdev_raid.sh@70 -- # cat 00:14:12.393 05:33:16 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:12.652 [2024-10-07 05:33:16.612326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:12.652 [2024-10-07 05:33:16.614783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:12.652 [2024-10-07 05:33:16.614973] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:12.652 [2024-10-07 05:33:16.615090] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:12.652 [2024-10-07 05:33:16.615290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:12.652 [2024-10-07 05:33:16.615786] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:12.652 [2024-10-07 05:33:16.615918] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:12.652 [2024-10-07 05:33:16.616259] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.652 Base_1 00:14:12.652 Base_2 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:12.911 05:33:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@12 -- # local i 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.911 05:33:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:13.169 [2024-10-07 
05:33:17.128508] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:13.428 /dev/nbd0 00:14:13.428 05:33:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.428 05:33:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.428 05:33:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:13.428 05:33:17 -- common/autotest_common.sh@857 -- # local i 00:14:13.428 05:33:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:13.428 05:33:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:13.428 05:33:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:13.428 05:33:17 -- common/autotest_common.sh@861 -- # break 00:14:13.428 05:33:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:13.428 05:33:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:13.428 05:33:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.428 1+0 records in 00:14:13.428 1+0 records out 00:14:13.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325134 s, 12.6 MB/s 00:14:13.428 05:33:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.428 05:33:17 -- common/autotest_common.sh@874 -- # size=4096 00:14:13.428 05:33:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.428 05:33:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:13.428 05:33:17 -- common/autotest_common.sh@877 -- # return 0 00:14:13.428 05:33:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.428 05:33:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.428 05:33:17 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:13.429 05:33:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:13.429 05:33:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:13.429 05:33:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:13.429 { 00:14:13.429 "nbd_device": "/dev/nbd0", 00:14:13.429 "bdev_name": "raid" 00:14:13.429 } 00:14:13.429 ]' 00:14:13.429 05:33:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.429 05:33:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:13.429 { 00:14:13.429 "nbd_device": "/dev/nbd0", 00:14:13.429 "bdev_name": "raid" 00:14:13.429 } 00:14:13.429 ]' 00:14:13.687 05:33:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:13.687 05:33:17 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:13.687 05:33:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.687 05:33:17 -- bdev/nbd_common.sh@65 -- # count=1 00:14:13.687 05:33:17 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:13.687 4096+0 records in 00:14:13.687 4096+0 records out 00:14:13.687 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292412 s, 71.7 MB/s 00:14:13.687 05:33:17 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:13.946 4096+0 records in 00:14:13.946 4096+0 records out 00:14:13.946 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.254189 s, 8.3 MB/s 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:13.946 05:33:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:13.947 128+0 records in 00:14:13.947 128+0 records out 00:14:13.947 65536 bytes (66 kB, 64 KiB) copied, 0.000375106 s, 175 MB/s 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:13.947 2035+0 records in 00:14:13.947 2035+0 records out 00:14:13.947 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00657081 s, 159 MB/s 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:13.947 456+0 records in 00:14:13.947 456+0 records out 00:14:13.947 233472 bytes (233 kB, 228 KiB) copied, 0.00179946 s, 130 MB/s 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:13.947 05:33:17 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:13.947 05:33:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@51 -- # local i 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.947 05:33:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.205 [2024-10-07 05:33:18.085244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@41 -- # break 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.205 05:33:18 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:14.205 05:33:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@65 -- # true 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@65 -- # count=0 00:14:14.463 05:33:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:14.463 05:33:18 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:14.463 05:33:18 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:14.463 05:33:18 -- bdev/bdev_raid.sh@111 -- # killprocess 134749 00:14:14.463 05:33:18 -- common/autotest_common.sh@926 -- # '[' -z 134749 ']' 00:14:14.463 05:33:18 -- common/autotest_common.sh@930 -- # kill -0 134749 00:14:14.463 05:33:18 -- common/autotest_common.sh@931 -- # uname 00:14:14.463 05:33:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.463 05:33:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134749 00:14:14.463 05:33:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.463 05:33:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.463 05:33:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134749' 00:14:14.463 killing process with pid 134749 00:14:14.463 05:33:18 -- common/autotest_common.sh@945 -- # kill 134749 00:14:14.463 05:33:18 -- common/autotest_common.sh@950 -- 
# wait 134749 00:14:14.463 [2024-10-07 05:33:18.411645] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.463 [2024-10-07 05:33:18.411862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.463 [2024-10-07 05:33:18.412044] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.463 [2024-10-07 05:33:18.412160] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:14.721 [2024-10-07 05:33:18.547487] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:15.658 00:14:15.658 real 0m4.272s 00:14:15.658 user 0m5.396s 00:14:15.658 sys 0m0.955s 00:14:15.658 05:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.658 05:33:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.658 ************************************ 00:14:15.658 END TEST raid_function_test_concat 00:14:15.658 ************************************ 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:15.658 05:33:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:15.658 05:33:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.658 05:33:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.658 ************************************ 00:14:15.658 START TEST raid0_resize_test 00:14:15.658 ************************************ 00:14:15.658 05:33:19 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@301 -- # raid_pid=135057 00:14:15.658 Process raid pid: 135057 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 135057' 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@303 -- # waitforlisten 135057 /var/tmp/spdk-raid.sock 00:14:15.658 05:33:19 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:15.658 05:33:19 -- common/autotest_common.sh@819 -- # '[' -z 135057 ']' 00:14:15.658 05:33:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:15.658 05:33:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:15.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:15.658 05:33:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:15.658 05:33:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:15.658 05:33:19 -- common/autotest_common.sh@10 -- # set +x 00:14:15.918 [2024-10-07 05:33:19.674185] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:15.918 [2024-10-07 05:33:19.674411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.918 [2024-10-07 05:33:19.844027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.178 [2024-10-07 05:33:20.039131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.436 [2024-10-07 05:33:20.231046] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.694 05:33:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:16.694 05:33:20 -- common/autotest_common.sh@852 -- # return 0 00:14:16.694 05:33:20 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:16.952 Base_1 00:14:16.952 05:33:20 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:17.210 Base_2 00:14:17.210 05:33:20 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:17.469 [2024-10-07 05:33:21.204551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:17.469 [2024-10-07 05:33:21.206666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:17.469 [2024-10-07 05:33:21.206854] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:17.469 [2024-10-07 05:33:21.207012] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:17.469 [2024-10-07 05:33:21.207178] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:14:17.469 [2024-10-07 05:33:21.207516] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:17.469 [2024-10-07 05:33:21.207642] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:14:17.469 [2024-10-07 05:33:21.207881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.469 05:33:21 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:17.469 [2024-10-07 05:33:21.396562] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:17.469 [2024-10-07 05:33:21.396692] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:17.469 true 00:14:17.469 05:33:21 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:17.469 05:33:21 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:17.728 [2024-10-07 05:33:21.668715] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.728 05:33:21 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:17.728 05:33:21 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:17.728 05:33:21 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:17.728 05:33:21 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:17.986 [2024-10-07 05:33:21.856609] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
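Both halves of the resize check follow the same pattern: grow one null base bdev with bdev_null_resize, then read the raid bdev's num_blocks back and compare it with the expected size. In this run the raid capacity stays at 64 MiB after only Base_1 has grown and reaches 128 MiB once Base_2 has grown as well; condensed, with the names from the log:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# grow the second member from 32 MiB to 64 MiB
$RPC bdev_null_resize Base_2 64
# 262144 blocks * 512 bytes = 128 MiB once both members are 64 MiB
blkcnt=$($RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks')
[ "$blkcnt" -eq 262144 ]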
00:14:17.986 [2024-10-07 05:33:21.856760] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:17.986 [2024-10-07 05:33:21.856892] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:17.986 [2024-10-07 05:33:21.856990] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:17.986 true 00:14:17.986 05:33:21 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:17.986 05:33:21 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:18.245 [2024-10-07 05:33:22.052765] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.245 05:33:22 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:18.245 05:33:22 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:18.245 05:33:22 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:18.245 05:33:22 -- bdev/bdev_raid.sh@332 -- # killprocess 135057 00:14:18.245 05:33:22 -- common/autotest_common.sh@926 -- # '[' -z 135057 ']' 00:14:18.245 05:33:22 -- common/autotest_common.sh@930 -- # kill -0 135057 00:14:18.245 05:33:22 -- common/autotest_common.sh@931 -- # uname 00:14:18.245 05:33:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:18.245 05:33:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135057 00:14:18.245 05:33:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:18.245 05:33:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:18.246 05:33:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135057' 00:14:18.246 killing process with pid 135057 00:14:18.246 05:33:22 -- common/autotest_common.sh@945 -- # kill 135057 00:14:18.246 [2024-10-07 05:33:22.096397] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.246 05:33:22 -- common/autotest_common.sh@950 -- # wait 135057 00:14:18.246 [2024-10-07 05:33:22.096578] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.246 [2024-10-07 05:33:22.096735] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.246 [2024-10-07 05:33:22.096851] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:14:18.246 [2024-10-07 05:33:22.097398] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:19.208 00:14:19.208 real 0m3.518s 00:14:19.208 user 0m4.893s 00:14:19.208 sys 0m0.562s 00:14:19.208 05:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.208 05:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.208 ************************************ 00:14:19.208 END TEST raid0_resize_test 00:14:19.208 ************************************ 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:19.208 05:33:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:19.208 05:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.208 05:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.208 ************************************ 00:14:19.208 START TEST 
raid_state_function_test 00:14:19.208 ************************************ 00:14:19.208 05:33:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:19.208 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=135230 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135230' 00:14:19.467 Process raid pid: 135230 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:19.467 05:33:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135230 /var/tmp/spdk-raid.sock 00:14:19.467 05:33:23 -- common/autotest_common.sh@819 -- # '[' -z 135230 ']' 00:14:19.467 05:33:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.467 05:33:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.467 05:33:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.467 05:33:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.467 05:33:23 -- common/autotest_common.sh@10 -- # set +x 00:14:19.467 [2024-10-07 05:33:23.258967] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:19.467 [2024-10-07 05:33:23.259214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.467 [2024-10-07 05:33:23.431022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.726 [2024-10-07 05:33:23.642463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.986 [2024-10-07 05:33:23.847129] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.244 05:33:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.244 05:33:24 -- common/autotest_common.sh@852 -- # return 0 00:14:20.244 05:33:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:20.503 [2024-10-07 05:33:24.377922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.503 [2024-10-07 05:33:24.378150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.503 [2024-10-07 05:33:24.378271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.503 [2024-10-07 05:33:24.378339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.503 05:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.761 05:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.761 "name": "Existed_Raid", 00:14:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.761 "strip_size_kb": 64, 00:14:20.761 "state": "configuring", 00:14:20.761 "raid_level": "raid0", 00:14:20.761 "superblock": false, 00:14:20.761 "num_base_bdevs": 2, 00:14:20.761 "num_base_bdevs_discovered": 0, 00:14:20.761 "num_base_bdevs_operational": 2, 00:14:20.761 "base_bdevs_list": [ 00:14:20.761 { 00:14:20.761 "name": "BaseBdev1", 00:14:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.761 "is_configured": false, 00:14:20.761 "data_offset": 0, 00:14:20.761 "data_size": 0 00:14:20.761 }, 00:14:20.761 { 00:14:20.761 "name": "BaseBdev2", 00:14:20.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.761 "is_configured": false, 00:14:20.761 "data_offset": 0, 00:14:20.761 "data_size": 0 00:14:20.761 } 00:14:20.761 ] 00:14:20.761 }' 00:14:20.761 05:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.761 05:33:24 -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.328 05:33:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:21.586 [2024-10-07 05:33:25.502104] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.586 [2024-10-07 05:33:25.502275] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:21.586 05:33:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:21.845 [2024-10-07 05:33:25.702124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.845 [2024-10-07 05:33:25.702322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.845 [2024-10-07 05:33:25.702431] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.845 [2024-10-07 05:33:25.702513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.845 05:33:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.103 [2024-10-07 05:33:25.995425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.103 BaseBdev1 00:14:22.103 05:33:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:22.103 05:33:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:22.103 05:33:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:22.103 05:33:26 -- common/autotest_common.sh@889 -- # local i 00:14:22.103 05:33:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:22.103 05:33:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:22.103 05:33:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.362 05:33:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.621 [ 00:14:22.621 { 00:14:22.621 "name": "BaseBdev1", 00:14:22.621 "aliases": [ 00:14:22.621 "00667653-c640-434c-a9e8-510866b70352" 00:14:22.621 ], 00:14:22.621 "product_name": "Malloc disk", 00:14:22.621 "block_size": 512, 00:14:22.621 "num_blocks": 65536, 00:14:22.621 "uuid": "00667653-c640-434c-a9e8-510866b70352", 00:14:22.621 "assigned_rate_limits": { 00:14:22.621 "rw_ios_per_sec": 0, 00:14:22.621 "rw_mbytes_per_sec": 0, 00:14:22.621 "r_mbytes_per_sec": 0, 00:14:22.621 "w_mbytes_per_sec": 0 00:14:22.621 }, 00:14:22.621 "claimed": true, 00:14:22.621 "claim_type": "exclusive_write", 00:14:22.621 "zoned": false, 00:14:22.621 "supported_io_types": { 00:14:22.621 "read": true, 00:14:22.621 "write": true, 00:14:22.621 "unmap": true, 00:14:22.621 "write_zeroes": true, 00:14:22.621 "flush": true, 00:14:22.621 "reset": true, 00:14:22.621 "compare": false, 00:14:22.621 "compare_and_write": false, 00:14:22.621 "abort": true, 00:14:22.621 "nvme_admin": false, 00:14:22.621 "nvme_io": false 00:14:22.621 }, 00:14:22.621 "memory_domains": [ 00:14:22.621 { 00:14:22.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.621 "dma_device_type": 2 00:14:22.621 } 00:14:22.621 ], 00:14:22.621 "driver_specific": {} 00:14:22.621 } 00:14:22.621 ] 00:14:22.621 05:33:26 
-- common/autotest_common.sh@895 -- # return 0 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.621 05:33:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.879 05:33:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.879 "name": "Existed_Raid", 00:14:22.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.879 "strip_size_kb": 64, 00:14:22.879 "state": "configuring", 00:14:22.879 "raid_level": "raid0", 00:14:22.879 "superblock": false, 00:14:22.879 "num_base_bdevs": 2, 00:14:22.879 "num_base_bdevs_discovered": 1, 00:14:22.879 "num_base_bdevs_operational": 2, 00:14:22.879 "base_bdevs_list": [ 00:14:22.879 { 00:14:22.879 "name": "BaseBdev1", 00:14:22.879 "uuid": "00667653-c640-434c-a9e8-510866b70352", 00:14:22.879 "is_configured": true, 00:14:22.879 "data_offset": 0, 00:14:22.879 "data_size": 65536 00:14:22.879 }, 00:14:22.879 { 00:14:22.879 "name": "BaseBdev2", 00:14:22.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.879 "is_configured": false, 00:14:22.879 "data_offset": 0, 00:14:22.879 "data_size": 0 00:14:22.879 } 00:14:22.879 ] 00:14:22.879 }' 00:14:22.879 05:33:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.879 05:33:26 -- common/autotest_common.sh@10 -- # set +x 00:14:23.445 05:33:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.445 [2024-10-07 05:33:27.419727] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.445 [2024-10-07 05:33:27.419896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.704 [2024-10-07 05:33:27.659814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.704 [2024-10-07 05:33:27.661891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.704 [2024-10-07 05:33:27.662076] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:23.704 05:33:27 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.704 05:33:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.963 05:33:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.963 "name": "Existed_Raid", 00:14:23.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.963 "strip_size_kb": 64, 00:14:23.963 "state": "configuring", 00:14:23.963 "raid_level": "raid0", 00:14:23.963 "superblock": false, 00:14:23.963 "num_base_bdevs": 2, 00:14:23.963 "num_base_bdevs_discovered": 1, 00:14:23.963 "num_base_bdevs_operational": 2, 00:14:23.963 "base_bdevs_list": [ 00:14:23.963 { 00:14:23.963 "name": "BaseBdev1", 00:14:23.963 "uuid": "00667653-c640-434c-a9e8-510866b70352", 00:14:23.963 "is_configured": true, 00:14:23.963 "data_offset": 0, 00:14:23.963 "data_size": 65536 00:14:23.963 }, 00:14:23.963 { 00:14:23.963 "name": "BaseBdev2", 00:14:23.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.963 "is_configured": false, 00:14:23.963 "data_offset": 0, 00:14:23.963 "data_size": 0 00:14:23.963 } 00:14:23.963 ] 00:14:23.963 }' 00:14:23.963 05:33:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.963 05:33:27 -- common/autotest_common.sh@10 -- # set +x 00:14:24.530 05:33:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.789 [2024-10-07 05:33:28.681089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.789 [2024-10-07 05:33:28.681301] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:24.789 [2024-10-07 05:33:28.681405] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:24.789 [2024-10-07 05:33:28.681563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:24.789 [2024-10-07 05:33:28.681953] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:24.789 [2024-10-07 05:33:28.682085] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:24.789 [2024-10-07 05:33:28.682453] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.789 BaseBdev2 00:14:24.789 05:33:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:24.789 05:33:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:24.789 05:33:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.789 05:33:28 -- common/autotest_common.sh@889 -- # local i 00:14:24.789 05:33:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.789 05:33:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.789 
05:33:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.048 05:33:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.306 [ 00:14:25.306 { 00:14:25.306 "name": "BaseBdev2", 00:14:25.306 "aliases": [ 00:14:25.306 "2f712b15-0698-4bda-9685-d3ec79938377" 00:14:25.306 ], 00:14:25.306 "product_name": "Malloc disk", 00:14:25.306 "block_size": 512, 00:14:25.306 "num_blocks": 65536, 00:14:25.306 "uuid": "2f712b15-0698-4bda-9685-d3ec79938377", 00:14:25.306 "assigned_rate_limits": { 00:14:25.306 "rw_ios_per_sec": 0, 00:14:25.306 "rw_mbytes_per_sec": 0, 00:14:25.306 "r_mbytes_per_sec": 0, 00:14:25.306 "w_mbytes_per_sec": 0 00:14:25.306 }, 00:14:25.306 "claimed": true, 00:14:25.306 "claim_type": "exclusive_write", 00:14:25.306 "zoned": false, 00:14:25.306 "supported_io_types": { 00:14:25.306 "read": true, 00:14:25.306 "write": true, 00:14:25.306 "unmap": true, 00:14:25.306 "write_zeroes": true, 00:14:25.306 "flush": true, 00:14:25.306 "reset": true, 00:14:25.306 "compare": false, 00:14:25.306 "compare_and_write": false, 00:14:25.306 "abort": true, 00:14:25.306 "nvme_admin": false, 00:14:25.306 "nvme_io": false 00:14:25.306 }, 00:14:25.306 "memory_domains": [ 00:14:25.306 { 00:14:25.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.306 "dma_device_type": 2 00:14:25.306 } 00:14:25.306 ], 00:14:25.306 "driver_specific": {} 00:14:25.306 } 00:14:25.306 ] 00:14:25.306 05:33:29 -- common/autotest_common.sh@895 -- # return 0 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.306 "name": "Existed_Raid", 00:14:25.306 "uuid": "8cf052dc-ef3c-4e3a-9887-cac15e4639ff", 00:14:25.306 "strip_size_kb": 64, 00:14:25.306 "state": "online", 00:14:25.306 "raid_level": "raid0", 00:14:25.306 "superblock": false, 00:14:25.306 "num_base_bdevs": 2, 00:14:25.306 "num_base_bdevs_discovered": 2, 00:14:25.306 "num_base_bdevs_operational": 2, 00:14:25.306 "base_bdevs_list": [ 00:14:25.306 { 00:14:25.306 "name": "BaseBdev1", 00:14:25.306 "uuid": "00667653-c640-434c-a9e8-510866b70352", 00:14:25.306 "is_configured": true, 00:14:25.306 "data_offset": 0, 00:14:25.306 "data_size": 65536 00:14:25.306 }, 00:14:25.306 { 00:14:25.306 "name": "BaseBdev2", 
00:14:25.306 "uuid": "2f712b15-0698-4bda-9685-d3ec79938377", 00:14:25.306 "is_configured": true, 00:14:25.306 "data_offset": 0, 00:14:25.306 "data_size": 65536 00:14:25.306 } 00:14:25.306 ] 00:14:25.306 }' 00:14:25.306 05:33:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.306 05:33:29 -- common/autotest_common.sh@10 -- # set +x 00:14:25.871 05:33:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.130 [2024-10-07 05:33:29.981484] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.130 [2024-10-07 05:33:29.981509] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.130 [2024-10-07 05:33:29.981563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.130 05:33:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.389 05:33:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.389 "name": "Existed_Raid", 00:14:26.389 "uuid": "8cf052dc-ef3c-4e3a-9887-cac15e4639ff", 00:14:26.389 "strip_size_kb": 64, 00:14:26.389 "state": "offline", 00:14:26.389 "raid_level": "raid0", 00:14:26.389 "superblock": false, 00:14:26.389 "num_base_bdevs": 2, 00:14:26.389 "num_base_bdevs_discovered": 1, 00:14:26.389 "num_base_bdevs_operational": 1, 00:14:26.389 "base_bdevs_list": [ 00:14:26.389 { 00:14:26.389 "name": null, 00:14:26.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.389 "is_configured": false, 00:14:26.389 "data_offset": 0, 00:14:26.389 "data_size": 65536 00:14:26.389 }, 00:14:26.389 { 00:14:26.389 "name": "BaseBdev2", 00:14:26.389 "uuid": "2f712b15-0698-4bda-9685-d3ec79938377", 00:14:26.389 "is_configured": true, 00:14:26.389 "data_offset": 0, 00:14:26.389 "data_size": 65536 00:14:26.389 } 00:14:26.389 ] 00:14:26.389 }' 00:14:26.389 05:33:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.389 05:33:30 -- common/autotest_common.sh@10 -- # set +x 00:14:27.324 05:33:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:27.324 05:33:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:27.324 05:33:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.324 05:33:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:27.324 05:33:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:27.324 05:33:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.324 05:33:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:27.582 [2024-10-07 05:33:31.474075] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.582 [2024-10-07 05:33:31.474153] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:27.840 05:33:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:27.840 05:33:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:27.841 05:33:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.841 05:33:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:28.099 05:33:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:28.099 05:33:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:28.099 05:33:31 -- bdev/bdev_raid.sh@287 -- # killprocess 135230 00:14:28.099 05:33:31 -- common/autotest_common.sh@926 -- # '[' -z 135230 ']' 00:14:28.099 05:33:31 -- common/autotest_common.sh@930 -- # kill -0 135230 00:14:28.100 05:33:31 -- common/autotest_common.sh@931 -- # uname 00:14:28.100 05:33:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:28.100 05:33:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135230 00:14:28.100 killing process with pid 135230 00:14:28.100 05:33:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:28.100 05:33:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:28.100 05:33:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135230' 00:14:28.100 05:33:31 -- common/autotest_common.sh@945 -- # kill 135230 00:14:28.100 05:33:31 -- common/autotest_common.sh@950 -- # wait 135230 00:14:28.100 [2024-10-07 05:33:31.862160] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.100 [2024-10-07 05:33:31.862292] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.036 ************************************ 00:14:29.036 END TEST raid_state_function_test 00:14:29.036 ************************************ 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:29.036 00:14:29.036 real 0m9.717s 00:14:29.036 user 0m16.727s 00:14:29.036 sys 0m1.174s 00:14:29.036 05:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.036 05:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:29.036 05:33:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:29.036 05:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:29.036 05:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.036 ************************************ 00:14:29.036 START TEST raid_state_function_test_sb 00:14:29.036 ************************************ 00:14:29.036 05:33:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:29.036 05:33:32 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=135898 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135898' 00:14:29.036 Process raid pid: 135898 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135898 /var/tmp/spdk-raid.sock 00:14:29.036 05:33:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:29.036 05:33:32 -- common/autotest_common.sh@819 -- # '[' -z 135898 ']' 00:14:29.036 05:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:29.036 05:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:29.036 05:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:29.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:29.036 05:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:29.036 05:33:32 -- common/autotest_common.sh@10 -- # set +x 00:14:29.294 [2024-10-07 05:33:33.028493] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:14:29.294 [2024-10-07 05:33:33.028700] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.294 [2024-10-07 05:33:33.195622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.553 [2024-10-07 05:33:33.390649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.811 [2024-10-07 05:33:33.582457] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.070 05:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:30.070 05:33:33 -- common/autotest_common.sh@852 -- # return 0 00:14:30.070 05:33:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:30.329 [2024-10-07 05:33:34.170288] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.329 [2024-10-07 05:33:34.170366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.329 [2024-10-07 05:33:34.170388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.329 [2024-10-07 05:33:34.170409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.329 05:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.587 05:33:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.587 "name": "Existed_Raid", 00:14:30.587 "uuid": "ecf554d3-81d0-46aa-81d0-28a6f933103c", 00:14:30.587 "strip_size_kb": 64, 00:14:30.587 "state": "configuring", 00:14:30.587 "raid_level": "raid0", 00:14:30.587 "superblock": true, 00:14:30.587 "num_base_bdevs": 2, 00:14:30.587 "num_base_bdevs_discovered": 0, 00:14:30.587 "num_base_bdevs_operational": 2, 00:14:30.587 "base_bdevs_list": [ 00:14:30.587 { 00:14:30.587 "name": "BaseBdev1", 00:14:30.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.587 "is_configured": false, 00:14:30.587 "data_offset": 0, 00:14:30.587 "data_size": 0 00:14:30.587 }, 00:14:30.587 { 00:14:30.587 "name": "BaseBdev2", 00:14:30.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.587 "is_configured": false, 00:14:30.587 "data_offset": 0, 00:14:30.587 "data_size": 0 00:14:30.587 } 00:14:30.587 ] 00:14:30.587 }' 00:14:30.587 05:33:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.587 05:33:34 -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.154 05:33:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:31.412 [2024-10-07 05:33:35.306349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.412 [2024-10-07 05:33:35.306394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:31.413 05:33:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:31.671 [2024-10-07 05:33:35.550427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.671 [2024-10-07 05:33:35.550507] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.671 [2024-10-07 05:33:35.550527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.671 [2024-10-07 05:33:35.550556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.671 05:33:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.930 [2024-10-07 05:33:35.832193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.930 BaseBdev1 00:14:31.930 05:33:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:31.930 05:33:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:31.930 05:33:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:31.930 05:33:35 -- common/autotest_common.sh@889 -- # local i 00:14:31.930 05:33:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:31.930 05:33:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:31.930 05:33:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:32.189 05:33:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.447 [ 00:14:32.447 { 00:14:32.447 "name": "BaseBdev1", 00:14:32.447 "aliases": [ 00:14:32.447 "4bc13303-e124-420c-a0a9-ed717323cd30" 00:14:32.447 ], 00:14:32.447 "product_name": "Malloc disk", 00:14:32.447 "block_size": 512, 00:14:32.447 "num_blocks": 65536, 00:14:32.447 "uuid": "4bc13303-e124-420c-a0a9-ed717323cd30", 00:14:32.447 "assigned_rate_limits": { 00:14:32.447 "rw_ios_per_sec": 0, 00:14:32.447 "rw_mbytes_per_sec": 0, 00:14:32.447 "r_mbytes_per_sec": 0, 00:14:32.447 "w_mbytes_per_sec": 0 00:14:32.447 }, 00:14:32.447 "claimed": true, 00:14:32.447 "claim_type": "exclusive_write", 00:14:32.447 "zoned": false, 00:14:32.447 "supported_io_types": { 00:14:32.447 "read": true, 00:14:32.447 "write": true, 00:14:32.447 "unmap": true, 00:14:32.447 "write_zeroes": true, 00:14:32.447 "flush": true, 00:14:32.447 "reset": true, 00:14:32.447 "compare": false, 00:14:32.447 "compare_and_write": false, 00:14:32.447 "abort": true, 00:14:32.447 "nvme_admin": false, 00:14:32.447 "nvme_io": false 00:14:32.447 }, 00:14:32.447 "memory_domains": [ 00:14:32.447 { 00:14:32.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.447 "dma_device_type": 2 00:14:32.447 } 00:14:32.447 ], 00:14:32.447 "driver_specific": {} 00:14:32.447 } 00:14:32.447 ] 00:14:32.447 
05:33:36 -- common/autotest_common.sh@895 -- # return 0 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.447 05:33:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.448 "name": "Existed_Raid", 00:14:32.448 "uuid": "6f04d5c5-9e11-4bc1-9e70-21b4baf974ab", 00:14:32.448 "strip_size_kb": 64, 00:14:32.448 "state": "configuring", 00:14:32.448 "raid_level": "raid0", 00:14:32.448 "superblock": true, 00:14:32.448 "num_base_bdevs": 2, 00:14:32.448 "num_base_bdevs_discovered": 1, 00:14:32.448 "num_base_bdevs_operational": 2, 00:14:32.448 "base_bdevs_list": [ 00:14:32.448 { 00:14:32.448 "name": "BaseBdev1", 00:14:32.448 "uuid": "4bc13303-e124-420c-a0a9-ed717323cd30", 00:14:32.448 "is_configured": true, 00:14:32.448 "data_offset": 2048, 00:14:32.448 "data_size": 63488 00:14:32.448 }, 00:14:32.448 { 00:14:32.448 "name": "BaseBdev2", 00:14:32.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.448 "is_configured": false, 00:14:32.448 "data_offset": 0, 00:14:32.448 "data_size": 0 00:14:32.448 } 00:14:32.448 ] 00:14:32.448 }' 00:14:32.448 05:33:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.448 05:33:36 -- common/autotest_common.sh@10 -- # set +x 00:14:33.387 05:33:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.387 [2024-10-07 05:33:37.288485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.387 [2024-10-07 05:33:37.288538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:33.387 05:33:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:33.387 05:33:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:33.656 05:33:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:33.944 BaseBdev1 00:14:33.944 05:33:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:33.944 05:33:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:33.944 05:33:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:33.944 05:33:37 -- common/autotest_common.sh@889 -- # local i 00:14:33.944 05:33:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:33.944 05:33:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:33.944 05:33:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.202 05:33:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.464 [ 00:14:34.464 { 00:14:34.464 "name": "BaseBdev1", 00:14:34.464 "aliases": [ 00:14:34.464 "3b282641-a68e-4c75-8157-b7e3fdd025b6" 00:14:34.464 ], 00:14:34.464 "product_name": "Malloc disk", 00:14:34.465 "block_size": 512, 00:14:34.465 "num_blocks": 65536, 00:14:34.465 "uuid": "3b282641-a68e-4c75-8157-b7e3fdd025b6", 00:14:34.465 "assigned_rate_limits": { 00:14:34.465 "rw_ios_per_sec": 0, 00:14:34.465 "rw_mbytes_per_sec": 0, 00:14:34.465 "r_mbytes_per_sec": 0, 00:14:34.465 "w_mbytes_per_sec": 0 00:14:34.465 }, 00:14:34.465 "claimed": false, 00:14:34.465 "zoned": false, 00:14:34.465 "supported_io_types": { 00:14:34.465 "read": true, 00:14:34.465 "write": true, 00:14:34.465 "unmap": true, 00:14:34.465 "write_zeroes": true, 00:14:34.465 "flush": true, 00:14:34.465 "reset": true, 00:14:34.465 "compare": false, 00:14:34.465 "compare_and_write": false, 00:14:34.465 "abort": true, 00:14:34.465 "nvme_admin": false, 00:14:34.465 "nvme_io": false 00:14:34.465 }, 00:14:34.465 "memory_domains": [ 00:14:34.465 { 00:14:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.465 "dma_device_type": 2 00:14:34.465 } 00:14:34.465 ], 00:14:34.465 "driver_specific": {} 00:14:34.465 } 00:14:34.465 ] 00:14:34.465 05:33:38 -- common/autotest_common.sh@895 -- # return 0 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:34.465 [2024-10-07 05:33:38.414324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.465 [2024-10-07 05:33:38.416602] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.465 [2024-10-07 05:33:38.416665] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.465 05:33:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.032 05:33:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.032 "name": "Existed_Raid", 00:14:35.032 "uuid": "3ec2a99b-5489-4d97-9d97-e67e4543d738", 00:14:35.032 "strip_size_kb": 64, 00:14:35.032 "state": 
"configuring", 00:14:35.032 "raid_level": "raid0", 00:14:35.032 "superblock": true, 00:14:35.032 "num_base_bdevs": 2, 00:14:35.032 "num_base_bdevs_discovered": 1, 00:14:35.032 "num_base_bdevs_operational": 2, 00:14:35.032 "base_bdevs_list": [ 00:14:35.032 { 00:14:35.032 "name": "BaseBdev1", 00:14:35.032 "uuid": "3b282641-a68e-4c75-8157-b7e3fdd025b6", 00:14:35.032 "is_configured": true, 00:14:35.032 "data_offset": 2048, 00:14:35.032 "data_size": 63488 00:14:35.032 }, 00:14:35.032 { 00:14:35.032 "name": "BaseBdev2", 00:14:35.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.032 "is_configured": false, 00:14:35.032 "data_offset": 0, 00:14:35.032 "data_size": 0 00:14:35.032 } 00:14:35.032 ] 00:14:35.032 }' 00:14:35.032 05:33:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.032 05:33:38 -- common/autotest_common.sh@10 -- # set +x 00:14:35.599 05:33:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.858 [2024-10-07 05:33:39.675214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.858 [2024-10-07 05:33:39.675468] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:35.858 [2024-10-07 05:33:39.675483] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:35.858 [2024-10-07 05:33:39.675635] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:35.858 [2024-10-07 05:33:39.676026] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:35.858 [2024-10-07 05:33:39.676050] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:14:35.858 BaseBdev2 00:14:35.858 [2024-10-07 05:33:39.676237] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.858 05:33:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:35.858 05:33:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:35.858 05:33:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:35.858 05:33:39 -- common/autotest_common.sh@889 -- # local i 00:14:35.858 05:33:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:35.858 05:33:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:35.858 05:33:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.117 05:33:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.375 [ 00:14:36.375 { 00:14:36.375 "name": "BaseBdev2", 00:14:36.375 "aliases": [ 00:14:36.375 "d32ddcd2-0c9f-4790-bf17-306e5df887e9" 00:14:36.375 ], 00:14:36.375 "product_name": "Malloc disk", 00:14:36.375 "block_size": 512, 00:14:36.375 "num_blocks": 65536, 00:14:36.375 "uuid": "d32ddcd2-0c9f-4790-bf17-306e5df887e9", 00:14:36.375 "assigned_rate_limits": { 00:14:36.375 "rw_ios_per_sec": 0, 00:14:36.375 "rw_mbytes_per_sec": 0, 00:14:36.375 "r_mbytes_per_sec": 0, 00:14:36.375 "w_mbytes_per_sec": 0 00:14:36.375 }, 00:14:36.375 "claimed": true, 00:14:36.375 "claim_type": "exclusive_write", 00:14:36.375 "zoned": false, 00:14:36.375 "supported_io_types": { 00:14:36.375 "read": true, 00:14:36.375 "write": true, 00:14:36.375 "unmap": true, 00:14:36.375 "write_zeroes": true, 00:14:36.375 "flush": true, 00:14:36.375 
"reset": true, 00:14:36.375 "compare": false, 00:14:36.375 "compare_and_write": false, 00:14:36.375 "abort": true, 00:14:36.375 "nvme_admin": false, 00:14:36.375 "nvme_io": false 00:14:36.375 }, 00:14:36.375 "memory_domains": [ 00:14:36.375 { 00:14:36.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.375 "dma_device_type": 2 00:14:36.375 } 00:14:36.375 ], 00:14:36.375 "driver_specific": {} 00:14:36.375 } 00:14:36.375 ] 00:14:36.375 05:33:40 -- common/autotest_common.sh@895 -- # return 0 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.375 "name": "Existed_Raid", 00:14:36.375 "uuid": "3ec2a99b-5489-4d97-9d97-e67e4543d738", 00:14:36.375 "strip_size_kb": 64, 00:14:36.375 "state": "online", 00:14:36.375 "raid_level": "raid0", 00:14:36.375 "superblock": true, 00:14:36.375 "num_base_bdevs": 2, 00:14:36.375 "num_base_bdevs_discovered": 2, 00:14:36.375 "num_base_bdevs_operational": 2, 00:14:36.375 "base_bdevs_list": [ 00:14:36.375 { 00:14:36.375 "name": "BaseBdev1", 00:14:36.375 "uuid": "3b282641-a68e-4c75-8157-b7e3fdd025b6", 00:14:36.375 "is_configured": true, 00:14:36.375 "data_offset": 2048, 00:14:36.375 "data_size": 63488 00:14:36.375 }, 00:14:36.375 { 00:14:36.375 "name": "BaseBdev2", 00:14:36.375 "uuid": "d32ddcd2-0c9f-4790-bf17-306e5df887e9", 00:14:36.375 "is_configured": true, 00:14:36.375 "data_offset": 2048, 00:14:36.375 "data_size": 63488 00:14:36.375 } 00:14:36.375 ] 00:14:36.375 }' 00:14:36.375 05:33:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.375 05:33:40 -- common/autotest_common.sh@10 -- # set +x 00:14:36.941 05:33:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:37.199 [2024-10-07 05:33:41.163123] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.199 [2024-10-07 05:33:41.163163] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.199 [2024-10-07 05:33:41.163235] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:37.458 
05:33:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.458 05:33:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.717 05:33:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.717 "name": "Existed_Raid", 00:14:37.717 "uuid": "3ec2a99b-5489-4d97-9d97-e67e4543d738", 00:14:37.717 "strip_size_kb": 64, 00:14:37.717 "state": "offline", 00:14:37.717 "raid_level": "raid0", 00:14:37.717 "superblock": true, 00:14:37.717 "num_base_bdevs": 2, 00:14:37.717 "num_base_bdevs_discovered": 1, 00:14:37.717 "num_base_bdevs_operational": 1, 00:14:37.717 "base_bdevs_list": [ 00:14:37.717 { 00:14:37.717 "name": null, 00:14:37.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.717 "is_configured": false, 00:14:37.717 "data_offset": 2048, 00:14:37.717 "data_size": 63488 00:14:37.717 }, 00:14:37.717 { 00:14:37.717 "name": "BaseBdev2", 00:14:37.717 "uuid": "d32ddcd2-0c9f-4790-bf17-306e5df887e9", 00:14:37.717 "is_configured": true, 00:14:37.717 "data_offset": 2048, 00:14:37.717 "data_size": 63488 00:14:37.717 } 00:14:37.717 ] 00:14:37.717 }' 00:14:37.717 05:33:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.717 05:33:41 -- common/autotest_common.sh@10 -- # set +x 00:14:37.976 05:33:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:37.976 05:33:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:37.976 05:33:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.976 05:33:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:38.544 [2024-10-07 05:33:42.421740] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.544 [2024-10-07 05:33:42.421836] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.544 05:33:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:38.809 05:33:42 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:38.809 05:33:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:38.809 05:33:42 -- bdev/bdev_raid.sh@287 -- # killprocess 135898 00:14:38.809 05:33:42 -- common/autotest_common.sh@926 -- # '[' -z 135898 ']' 00:14:38.809 05:33:42 -- common/autotest_common.sh@930 -- # kill -0 135898 00:14:38.809 05:33:42 -- common/autotest_common.sh@931 -- # uname 00:14:38.809 05:33:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:38.809 05:33:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135898 00:14:38.809 killing process with pid 135898 00:14:38.809 05:33:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:38.809 05:33:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:38.809 05:33:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135898' 00:14:38.809 05:33:42 -- common/autotest_common.sh@945 -- # kill 135898 00:14:38.809 05:33:42 -- common/autotest_common.sh@950 -- # wait 135898 00:14:38.809 [2024-10-07 05:33:42.725349] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.809 [2024-10-07 05:33:42.725476] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:40.186 00:14:40.186 real 0m10.834s 00:14:40.186 user 0m18.823s 00:14:40.186 sys 0m1.257s 00:14:40.186 05:33:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.186 05:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 ************************************ 00:14:40.186 END TEST raid_state_function_test_sb 00:14:40.186 ************************************ 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:40.186 05:33:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:40.186 05:33:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.186 05:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 ************************************ 00:14:40.186 START TEST raid_superblock_test 00:14:40.186 ************************************ 00:14:40.186 05:33:43 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=136517 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@358 -- # waitforlisten 136517 
/var/tmp/spdk-raid.sock 00:14:40.186 05:33:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:40.186 05:33:43 -- common/autotest_common.sh@819 -- # '[' -z 136517 ']' 00:14:40.186 05:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.186 05:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.186 05:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:40.186 05:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.186 05:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:40.186 [2024-10-07 05:33:43.902261] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:40.186 [2024-10-07 05:33:43.902398] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136517 ] 00:14:40.186 [2024-10-07 05:33:44.057944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.445 [2024-10-07 05:33:44.316269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.703 [2024-10-07 05:33:44.507503] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.961 05:33:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:40.961 05:33:44 -- common/autotest_common.sh@852 -- # return 0 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.961 05:33:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:41.220 malloc1 00:14:41.220 05:33:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.478 [2024-10-07 05:33:45.237953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.478 [2024-10-07 05:33:45.238198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.478 [2024-10-07 05:33:45.238276] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:41.478 [2024-10-07 05:33:45.238435] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.478 [2024-10-07 05:33:45.240899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.478 [2024-10-07 05:33:45.241074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.478 pt1 00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
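[editor's note] The trace above builds the first test leg: a 32 MiB malloc bdev with 512-byte blocks, then a passthru bdev (pt1) layered on top of it with a fixed UUID. A minimal sketch of that setup, assuming the same bdev_svc app is already listening on /var/tmp/spdk-raid.sock and that rpc.py sits at the path this run uses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc bdev with 512-byte block size, as in the trace
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  # passthru bdev pt1 on top of malloc1 with a deterministic UUID
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The passthru layer gives the raid test a claimable bdev with a known UUID without touching the malloc bdev directly.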
00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:41.478 05:33:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:41.479 05:33:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:41.479 05:33:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:41.479 05:33:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:41.736 malloc2 00:14:41.736 05:33:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.995 [2024-10-07 05:33:45.730726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.995 [2024-10-07 05:33:45.730926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.995 [2024-10-07 05:33:45.731008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:41.995 [2024-10-07 05:33:45.731168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.995 [2024-10-07 05:33:45.733497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.995 [2024-10-07 05:33:45.733688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.995 pt2 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:41.995 [2024-10-07 05:33:45.914977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.995 [2024-10-07 05:33:45.917093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.995 [2024-10-07 05:33:45.917470] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:14:41.995 [2024-10-07 05:33:45.917624] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:41.995 [2024-10-07 05:33:45.917857] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:41.995 [2024-10-07 05:33:45.918316] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:14:41.995 [2024-10-07 05:33:45.918440] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:14:41.995 [2024-10-07 05:33:45.918830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
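[editor's note] With pt1 and pt2 in place, the trace assembles raid_bdev1 and then checks its state through verify_raid_bdev_state. A condensed sketch of that assembly and check, reusing the exact RPC arguments and jq filter visible in the log (same socket and paths assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # raid0 over pt1+pt2, 64 KiB strip size, -s writes an on-disk superblock
  $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
  # read the state back; the test expects "online" with 2 of 2 base bdevs discovered
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'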
00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.995 05:33:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.253 05:33:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.253 "name": "raid_bdev1", 00:14:42.253 "uuid": "1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5", 00:14:42.253 "strip_size_kb": 64, 00:14:42.253 "state": "online", 00:14:42.253 "raid_level": "raid0", 00:14:42.253 "superblock": true, 00:14:42.253 "num_base_bdevs": 2, 00:14:42.253 "num_base_bdevs_discovered": 2, 00:14:42.253 "num_base_bdevs_operational": 2, 00:14:42.253 "base_bdevs_list": [ 00:14:42.253 { 00:14:42.253 "name": "pt1", 00:14:42.253 "uuid": "84e30fb7-cf3a-537b-80a1-fe5c20d12d2d", 00:14:42.253 "is_configured": true, 00:14:42.253 "data_offset": 2048, 00:14:42.253 "data_size": 63488 00:14:42.253 }, 00:14:42.253 { 00:14:42.253 "name": "pt2", 00:14:42.254 "uuid": "7ea2351a-435c-532f-8f69-0056ca52c458", 00:14:42.254 "is_configured": true, 00:14:42.254 "data_offset": 2048, 00:14:42.254 "data_size": 63488 00:14:42.254 } 00:14:42.254 ] 00:14:42.254 }' 00:14:42.254 05:33:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.254 05:33:46 -- common/autotest_common.sh@10 -- # set +x 00:14:43.189 05:33:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.189 05:33:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:43.189 [2024-10-07 05:33:47.111482] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.189 05:33:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5 00:14:43.189 05:33:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5 ']' 00:14:43.189 05:33:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:43.447 [2024-10-07 05:33:47.375350] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.447 [2024-10-07 05:33:47.375685] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.447 [2024-10-07 05:33:47.375891] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.447 [2024-10-07 05:33:47.376065] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.447 [2024-10-07 05:33:47.376196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:14:43.447 05:33:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.447 05:33:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:43.706 05:33:47 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:43.706 05:33:47 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:43.706 05:33:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:43.706 05:33:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
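[editor's note] The next few entries are the teardown path: the raid bdev's UUID is captured, the raid bdev is deleted, the raid list is confirmed empty, and the passthru bdevs are removed one by one (pt1 here, pt2 immediately after). A sketch of that sequence under the same socket and path assumptions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # remember the uuid so it can be compared after re-assembly from superblocks
  uuid=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  $rpc -s $sock bdev_raid_delete raid_bdev1    # raid goes away, base bdevs remain
  $rpc -s $sock bdev_passthru_delete pt1
  $rpc -s $sock bdev_passthru_delete pt2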
00:14:43.964 05:33:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:43.964 05:33:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:44.223 05:33:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:44.223 05:33:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:44.482 05:33:48 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:44.482 05:33:48 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:44.482 05:33:48 -- common/autotest_common.sh@640 -- # local es=0 00:14:44.482 05:33:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:44.482 05:33:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.482 05:33:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.482 05:33:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.482 05:33:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.482 05:33:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.482 05:33:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:44.482 05:33:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.482 05:33:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.482 05:33:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:44.740 [2024-10-07 05:33:48.575693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:44.740 [2024-10-07 05:33:48.577993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:44.740 [2024-10-07 05:33:48.578201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:44.740 [2024-10-07 05:33:48.578410] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:44.740 [2024-10-07 05:33:48.578612] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.740 [2024-10-07 05:33:48.578660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:14:44.740 request: 00:14:44.740 { 00:14:44.740 "name": "raid_bdev1", 00:14:44.740 "raid_level": "raid0", 00:14:44.740 "base_bdevs": [ 00:14:44.740 "malloc1", 00:14:44.740 "malloc2" 00:14:44.740 ], 00:14:44.740 "superblock": false, 00:14:44.740 "strip_size_kb": 64, 00:14:44.740 "method": "bdev_raid_create", 00:14:44.740 "req_id": 1 00:14:44.740 } 00:14:44.740 Got JSON-RPC error response 00:14:44.740 response: 00:14:44.740 { 00:14:44.740 "code": -17, 00:14:44.740 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:44.740 } 00:14:44.740 05:33:48 -- common/autotest_common.sh@643 -- # es=1 00:14:44.740 05:33:48 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:14:44.740 05:33:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:44.740 05:33:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:44.740 05:33:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.740 05:33:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:44.998 05:33:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:44.998 05:33:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:44.998 05:33:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.257 [2024-10-07 05:33:49.047725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.257 [2024-10-07 05:33:49.048138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.257 [2024-10-07 05:33:49.048222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:45.257 [2024-10-07 05:33:49.048484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.257 [2024-10-07 05:33:49.051022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.257 [2024-10-07 05:33:49.051208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.257 [2024-10-07 05:33:49.051444] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:45.257 [2024-10-07 05:33:49.051613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:45.257 pt1 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.257 05:33:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.516 05:33:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.516 "name": "raid_bdev1", 00:14:45.516 "uuid": "1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5", 00:14:45.516 "strip_size_kb": 64, 00:14:45.516 "state": "configuring", 00:14:45.516 "raid_level": "raid0", 00:14:45.516 "superblock": true, 00:14:45.516 "num_base_bdevs": 2, 00:14:45.516 "num_base_bdevs_discovered": 1, 00:14:45.516 "num_base_bdevs_operational": 2, 00:14:45.516 "base_bdevs_list": [ 00:14:45.516 { 00:14:45.516 "name": "pt1", 00:14:45.516 "uuid": "84e30fb7-cf3a-537b-80a1-fe5c20d12d2d", 00:14:45.516 "is_configured": true, 00:14:45.516 "data_offset": 2048, 00:14:45.516 "data_size": 63488 00:14:45.516 }, 00:14:45.516 { 00:14:45.516 "name": null, 00:14:45.516 "uuid": "7ea2351a-435c-532f-8f69-0056ca52c458", 00:14:45.516 
"is_configured": false, 00:14:45.516 "data_offset": 2048, 00:14:45.516 "data_size": 63488 00:14:45.516 } 00:14:45.516 ] 00:14:45.516 }' 00:14:45.516 05:33:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.516 05:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:46.084 05:33:49 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:46.084 05:33:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:46.084 05:33:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:46.084 05:33:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.342 [2024-10-07 05:33:50.148209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.342 [2024-10-07 05:33:50.148329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.342 [2024-10-07 05:33:50.148375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:46.342 [2024-10-07 05:33:50.148402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.342 [2024-10-07 05:33:50.148922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.342 [2024-10-07 05:33:50.148965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.342 [2024-10-07 05:33:50.149074] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:46.342 [2024-10-07 05:33:50.149100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.342 [2024-10-07 05:33:50.149224] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:14:46.342 [2024-10-07 05:33:50.149236] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.342 [2024-10-07 05:33:50.149346] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:46.342 [2024-10-07 05:33:50.149695] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:14:46.342 [2024-10-07 05:33:50.149709] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:14:46.342 [2024-10-07 05:33:50.149851] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.342 pt2 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.342 05:33:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.342 05:33:50 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.601 05:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.601 "name": "raid_bdev1", 00:14:46.601 "uuid": "1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5", 00:14:46.601 "strip_size_kb": 64, 00:14:46.601 "state": "online", 00:14:46.601 "raid_level": "raid0", 00:14:46.601 "superblock": true, 00:14:46.601 "num_base_bdevs": 2, 00:14:46.601 "num_base_bdevs_discovered": 2, 00:14:46.601 "num_base_bdevs_operational": 2, 00:14:46.601 "base_bdevs_list": [ 00:14:46.601 { 00:14:46.601 "name": "pt1", 00:14:46.601 "uuid": "84e30fb7-cf3a-537b-80a1-fe5c20d12d2d", 00:14:46.601 "is_configured": true, 00:14:46.601 "data_offset": 2048, 00:14:46.601 "data_size": 63488 00:14:46.601 }, 00:14:46.601 { 00:14:46.601 "name": "pt2", 00:14:46.601 "uuid": "7ea2351a-435c-532f-8f69-0056ca52c458", 00:14:46.601 "is_configured": true, 00:14:46.601 "data_offset": 2048, 00:14:46.601 "data_size": 63488 00:14:46.601 } 00:14:46.601 ] 00:14:46.601 }' 00:14:46.601 05:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.601 05:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:47.166 05:33:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:47.166 05:33:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:47.425 [2024-10-07 05:33:51.194754] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.425 05:33:51 -- bdev/bdev_raid.sh@430 -- # '[' 1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5 '!=' 1a2fb2b3-558f-45a3-8cb2-f4fc379b35d5 ']' 00:14:47.425 05:33:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:47.425 05:33:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:47.425 05:33:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:47.425 05:33:51 -- bdev/bdev_raid.sh@511 -- # killprocess 136517 00:14:47.425 05:33:51 -- common/autotest_common.sh@926 -- # '[' -z 136517 ']' 00:14:47.425 05:33:51 -- common/autotest_common.sh@930 -- # kill -0 136517 00:14:47.425 05:33:51 -- common/autotest_common.sh@931 -- # uname 00:14:47.425 05:33:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:47.425 05:33:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136517 00:14:47.425 killing process with pid 136517 00:14:47.425 05:33:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:47.425 05:33:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:47.425 05:33:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136517' 00:14:47.425 05:33:51 -- common/autotest_common.sh@945 -- # kill 136517 00:14:47.425 [2024-10-07 05:33:51.230062] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.425 05:33:51 -- common/autotest_common.sh@950 -- # wait 136517 00:14:47.425 [2024-10-07 05:33:51.230135] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.425 [2024-10-07 05:33:51.230186] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.425 [2024-10-07 05:33:51.230197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:14:47.425 [2024-10-07 05:33:51.373460] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.831 ************************************ 00:14:48.831 END TEST raid_superblock_test 00:14:48.831 ************************************ 00:14:48.831 05:33:52 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:14:48.831 00:14:48.831 real 0m8.583s 00:14:48.831 user 0m14.474s 00:14:48.831 sys 0m1.108s 00:14:48.831 05:33:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.831 05:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:48.831 05:33:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:48.831 05:33:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:48.831 05:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:48.831 ************************************ 00:14:48.831 START TEST raid_state_function_test 00:14:48.831 ************************************ 00:14:48.831 05:33:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=137020 00:14:48.831 Process raid pid: 137020 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137020' 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:48.831 05:33:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137020 /var/tmp/spdk-raid.sock 00:14:48.831 05:33:52 -- common/autotest_common.sh@819 -- # '[' -z 137020 ']' 00:14:48.831 05:33:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:48.831 05:33:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
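[editor's note] raid_superblock_test ends by killing its bdev_svc instance, and raid_state_function_test immediately starts a fresh one. Every test in this log follows the same lifecycle; the sketch below assumes autotest_common.sh (which provides waitforlisten and killprocess) has been sourced and uses the binary and socket paths from this run:

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  sock=/var/tmp/spdk-raid.sock
  $svc -r $sock -i 0 -L bdev_raid &    # minimal bdev application with raid debug logging
  raid_pid=$!
  waitforlisten $raid_pid $sock        # block until the RPC socket accepts connections
  # ... issue bdev_* RPCs against $sock ...
  killprocess $raid_pid                # stop the app once the test body is done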
00:14:48.831 05:33:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:48.831 05:33:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.831 05:33:52 -- common/autotest_common.sh@10 -- # set +x 00:14:48.831 [2024-10-07 05:33:52.563592] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:48.831 [2024-10-07 05:33:52.563807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.831 [2024-10-07 05:33:52.727984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.090 [2024-10-07 05:33:52.934775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.348 [2024-10-07 05:33:53.132179] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.607 05:33:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:49.607 05:33:53 -- common/autotest_common.sh@852 -- # return 0 00:14:49.607 05:33:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:49.865 [2024-10-07 05:33:53.629192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.865 [2024-10-07 05:33:53.629270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.865 [2024-10-07 05:33:53.629303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.865 [2024-10-07 05:33:53.629325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.865 05:33:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.124 05:33:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.124 "name": "Existed_Raid", 00:14:50.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.124 "strip_size_kb": 64, 00:14:50.124 "state": "configuring", 00:14:50.124 "raid_level": "concat", 00:14:50.124 "superblock": false, 00:14:50.124 "num_base_bdevs": 2, 00:14:50.124 "num_base_bdevs_discovered": 0, 00:14:50.124 "num_base_bdevs_operational": 2, 00:14:50.124 "base_bdevs_list": [ 00:14:50.124 { 00:14:50.124 "name": "BaseBdev1", 00:14:50.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.124 "is_configured": false, 
00:14:50.124 "data_offset": 0, 00:14:50.124 "data_size": 0 00:14:50.124 }, 00:14:50.124 { 00:14:50.124 "name": "BaseBdev2", 00:14:50.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.124 "is_configured": false, 00:14:50.124 "data_offset": 0, 00:14:50.124 "data_size": 0 00:14:50.124 } 00:14:50.124 ] 00:14:50.124 }' 00:14:50.124 05:33:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.124 05:33:53 -- common/autotest_common.sh@10 -- # set +x 00:14:50.690 05:33:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:50.948 [2024-10-07 05:33:54.721314] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.948 [2024-10-07 05:33:54.721382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:50.948 05:33:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.948 [2024-10-07 05:33:54.913349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.948 [2024-10-07 05:33:54.913466] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.948 [2024-10-07 05:33:54.913482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.948 [2024-10-07 05:33:54.913513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.207 05:33:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.207 [2024-10-07 05:33:55.152015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.207 BaseBdev1 00:14:51.207 05:33:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:51.207 05:33:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:51.207 05:33:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:51.207 05:33:55 -- common/autotest_common.sh@889 -- # local i 00:14:51.207 05:33:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:51.207 05:33:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:51.207 05:33:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.466 05:33:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.724 [ 00:14:51.724 { 00:14:51.724 "name": "BaseBdev1", 00:14:51.724 "aliases": [ 00:14:51.724 "d6cfa02b-ceb7-4507-9b6f-870e78622120" 00:14:51.724 ], 00:14:51.724 "product_name": "Malloc disk", 00:14:51.724 "block_size": 512, 00:14:51.724 "num_blocks": 65536, 00:14:51.724 "uuid": "d6cfa02b-ceb7-4507-9b6f-870e78622120", 00:14:51.724 "assigned_rate_limits": { 00:14:51.724 "rw_ios_per_sec": 0, 00:14:51.724 "rw_mbytes_per_sec": 0, 00:14:51.724 "r_mbytes_per_sec": 0, 00:14:51.724 "w_mbytes_per_sec": 0 00:14:51.724 }, 00:14:51.724 "claimed": true, 00:14:51.724 "claim_type": "exclusive_write", 00:14:51.724 "zoned": false, 00:14:51.724 "supported_io_types": { 00:14:51.724 "read": true, 00:14:51.724 "write": true, 00:14:51.724 "unmap": true, 00:14:51.724 "write_zeroes": true, 00:14:51.724 "flush": true, 00:14:51.724 "reset": true, 00:14:51.724 
"compare": false, 00:14:51.724 "compare_and_write": false, 00:14:51.724 "abort": true, 00:14:51.724 "nvme_admin": false, 00:14:51.724 "nvme_io": false 00:14:51.724 }, 00:14:51.724 "memory_domains": [ 00:14:51.724 { 00:14:51.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.724 "dma_device_type": 2 00:14:51.724 } 00:14:51.724 ], 00:14:51.724 "driver_specific": {} 00:14:51.724 } 00:14:51.724 ] 00:14:51.724 05:33:55 -- common/autotest_common.sh@895 -- # return 0 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.724 05:33:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.983 05:33:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.983 "name": "Existed_Raid", 00:14:51.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.983 "strip_size_kb": 64, 00:14:51.983 "state": "configuring", 00:14:51.983 "raid_level": "concat", 00:14:51.983 "superblock": false, 00:14:51.983 "num_base_bdevs": 2, 00:14:51.983 "num_base_bdevs_discovered": 1, 00:14:51.983 "num_base_bdevs_operational": 2, 00:14:51.983 "base_bdevs_list": [ 00:14:51.983 { 00:14:51.983 "name": "BaseBdev1", 00:14:51.983 "uuid": "d6cfa02b-ceb7-4507-9b6f-870e78622120", 00:14:51.983 "is_configured": true, 00:14:51.983 "data_offset": 0, 00:14:51.984 "data_size": 65536 00:14:51.984 }, 00:14:51.984 { 00:14:51.984 "name": "BaseBdev2", 00:14:51.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.984 "is_configured": false, 00:14:51.984 "data_offset": 0, 00:14:51.984 "data_size": 0 00:14:51.984 } 00:14:51.984 ] 00:14:51.984 }' 00:14:51.984 05:33:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.984 05:33:55 -- common/autotest_common.sh@10 -- # set +x 00:14:52.548 05:33:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:52.806 [2024-10-07 05:33:56.596301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.806 [2024-10-07 05:33:56.596370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:52.806 05:33:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:52.806 05:33:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.063 [2024-10-07 05:33:56.860437] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.063 [2024-10-07 05:33:56.862294] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:53.063 [2024-10-07 05:33:56.862377] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.063 05:33:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.064 05:33:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.321 05:33:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.321 "name": "Existed_Raid", 00:14:53.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.321 "strip_size_kb": 64, 00:14:53.321 "state": "configuring", 00:14:53.321 "raid_level": "concat", 00:14:53.321 "superblock": false, 00:14:53.321 "num_base_bdevs": 2, 00:14:53.321 "num_base_bdevs_discovered": 1, 00:14:53.321 "num_base_bdevs_operational": 2, 00:14:53.321 "base_bdevs_list": [ 00:14:53.321 { 00:14:53.321 "name": "BaseBdev1", 00:14:53.321 "uuid": "d6cfa02b-ceb7-4507-9b6f-870e78622120", 00:14:53.321 "is_configured": true, 00:14:53.321 "data_offset": 0, 00:14:53.321 "data_size": 65536 00:14:53.321 }, 00:14:53.321 { 00:14:53.321 "name": "BaseBdev2", 00:14:53.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.321 "is_configured": false, 00:14:53.321 "data_offset": 0, 00:14:53.321 "data_size": 0 00:14:53.321 } 00:14:53.321 ] 00:14:53.321 }' 00:14:53.321 05:33:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.321 05:33:57 -- common/autotest_common.sh@10 -- # set +x 00:14:53.886 05:33:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.451 [2024-10-07 05:33:58.124311] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.451 [2024-10-07 05:33:58.124358] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:54.451 [2024-10-07 05:33:58.124370] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:54.451 [2024-10-07 05:33:58.124481] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:54.451 [2024-10-07 05:33:58.124894] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:54.451 [2024-10-07 05:33:58.124923] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:54.451 [2024-10-07 05:33:58.125212] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.451 BaseBdev2 00:14:54.451 05:33:58 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:54.451 05:33:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:54.451 05:33:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:54.451 05:33:58 -- common/autotest_common.sh@889 -- # local i 00:14:54.451 05:33:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:54.451 05:33:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:54.451 05:33:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.451 05:33:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.709 [ 00:14:54.709 { 00:14:54.709 "name": "BaseBdev2", 00:14:54.709 "aliases": [ 00:14:54.709 "bf77ae7f-eb10-4efd-a98d-318c031b9466" 00:14:54.709 ], 00:14:54.709 "product_name": "Malloc disk", 00:14:54.709 "block_size": 512, 00:14:54.709 "num_blocks": 65536, 00:14:54.709 "uuid": "bf77ae7f-eb10-4efd-a98d-318c031b9466", 00:14:54.709 "assigned_rate_limits": { 00:14:54.709 "rw_ios_per_sec": 0, 00:14:54.709 "rw_mbytes_per_sec": 0, 00:14:54.709 "r_mbytes_per_sec": 0, 00:14:54.709 "w_mbytes_per_sec": 0 00:14:54.709 }, 00:14:54.709 "claimed": true, 00:14:54.709 "claim_type": "exclusive_write", 00:14:54.709 "zoned": false, 00:14:54.709 "supported_io_types": { 00:14:54.709 "read": true, 00:14:54.709 "write": true, 00:14:54.709 "unmap": true, 00:14:54.709 "write_zeroes": true, 00:14:54.709 "flush": true, 00:14:54.709 "reset": true, 00:14:54.709 "compare": false, 00:14:54.709 "compare_and_write": false, 00:14:54.709 "abort": true, 00:14:54.709 "nvme_admin": false, 00:14:54.709 "nvme_io": false 00:14:54.709 }, 00:14:54.709 "memory_domains": [ 00:14:54.709 { 00:14:54.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.709 "dma_device_type": 2 00:14:54.709 } 00:14:54.709 ], 00:14:54.709 "driver_specific": {} 00:14:54.709 } 00:14:54.709 ] 00:14:54.709 05:33:58 -- common/autotest_common.sh@895 -- # return 0 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.709 05:33:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.966 05:33:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.966 "name": "Existed_Raid", 00:14:54.966 "uuid": "d675968a-6953-4023-9093-74d7bfca4582", 00:14:54.966 "strip_size_kb": 64, 00:14:54.966 "state": "online", 00:14:54.966 "raid_level": "concat", 00:14:54.966 "superblock": false, 
00:14:54.966 "num_base_bdevs": 2, 00:14:54.966 "num_base_bdevs_discovered": 2, 00:14:54.966 "num_base_bdevs_operational": 2, 00:14:54.966 "base_bdevs_list": [ 00:14:54.966 { 00:14:54.966 "name": "BaseBdev1", 00:14:54.966 "uuid": "d6cfa02b-ceb7-4507-9b6f-870e78622120", 00:14:54.966 "is_configured": true, 00:14:54.966 "data_offset": 0, 00:14:54.966 "data_size": 65536 00:14:54.966 }, 00:14:54.966 { 00:14:54.966 "name": "BaseBdev2", 00:14:54.966 "uuid": "bf77ae7f-eb10-4efd-a98d-318c031b9466", 00:14:54.966 "is_configured": true, 00:14:54.966 "data_offset": 0, 00:14:54.966 "data_size": 65536 00:14:54.966 } 00:14:54.966 ] 00:14:54.966 }' 00:14:54.966 05:33:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.966 05:33:58 -- common/autotest_common.sh@10 -- # set +x 00:14:55.529 05:33:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:55.787 [2024-10-07 05:33:59.660666] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.787 [2024-10-07 05:33:59.660696] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.787 [2024-10-07 05:33:59.660755] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.787 05:33:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.045 05:33:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.045 "name": "Existed_Raid", 00:14:56.045 "uuid": "d675968a-6953-4023-9093-74d7bfca4582", 00:14:56.045 "strip_size_kb": 64, 00:14:56.045 "state": "offline", 00:14:56.045 "raid_level": "concat", 00:14:56.045 "superblock": false, 00:14:56.045 "num_base_bdevs": 2, 00:14:56.045 "num_base_bdevs_discovered": 1, 00:14:56.045 "num_base_bdevs_operational": 1, 00:14:56.045 "base_bdevs_list": [ 00:14:56.045 { 00:14:56.045 "name": null, 00:14:56.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.045 "is_configured": false, 00:14:56.045 "data_offset": 0, 00:14:56.045 "data_size": 65536 00:14:56.045 }, 00:14:56.045 { 00:14:56.045 "name": "BaseBdev2", 00:14:56.045 "uuid": "bf77ae7f-eb10-4efd-a98d-318c031b9466", 00:14:56.045 "is_configured": true, 00:14:56.045 "data_offset": 0, 00:14:56.045 
"data_size": 65536 00:14:56.045 } 00:14:56.045 ] 00:14:56.045 }' 00:14:56.045 05:33:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.045 05:33:59 -- common/autotest_common.sh@10 -- # set +x 00:14:56.611 05:34:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:56.611 05:34:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:56.611 05:34:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:56.611 05:34:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.870 05:34:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:56.870 05:34:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.870 05:34:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:57.128 [2024-10-07 05:34:00.953735] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.129 [2024-10-07 05:34:00.953838] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:57.129 05:34:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:57.129 05:34:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:57.129 05:34:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.129 05:34:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.387 05:34:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:57.387 05:34:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:57.387 05:34:01 -- bdev/bdev_raid.sh@287 -- # killprocess 137020 00:14:57.387 05:34:01 -- common/autotest_common.sh@926 -- # '[' -z 137020 ']' 00:14:57.387 05:34:01 -- common/autotest_common.sh@930 -- # kill -0 137020 00:14:57.387 05:34:01 -- common/autotest_common.sh@931 -- # uname 00:14:57.387 05:34:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.387 05:34:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137020 00:14:57.387 05:34:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:57.387 05:34:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.387 05:34:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137020' 00:14:57.387 killing process with pid 137020 00:14:57.387 05:34:01 -- common/autotest_common.sh@945 -- # kill 137020 00:14:57.387 05:34:01 -- common/autotest_common.sh@950 -- # wait 137020 00:14:57.387 [2024-10-07 05:34:01.361680] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.387 [2024-10-07 05:34:01.361860] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.761 ************************************ 00:14:58.761 END TEST raid_state_function_test 00:14:58.761 ************************************ 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:58.761 00:14:58.761 real 0m9.900s 00:14:58.761 user 0m17.077s 00:14:58.761 sys 0m1.316s 00:14:58.761 05:34:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.761 05:34:02 -- common/autotest_common.sh@10 -- # set +x 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:58.761 05:34:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:58.761 05:34:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:58.761 05:34:02 -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.761 ************************************ 00:14:58.761 START TEST raid_state_function_test_sb 00:14:58.761 ************************************ 00:14:58.761 05:34:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=137651 00:14:58.761 Process raid pid: 137651 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137651' 00:14:58.761 05:34:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137651 /var/tmp/spdk-raid.sock 00:14:58.761 05:34:02 -- common/autotest_common.sh@819 -- # '[' -z 137651 ']' 00:14:58.761 05:34:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.761 05:34:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.761 05:34:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:58.761 05:34:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.761 05:34:02 -- common/autotest_common.sh@10 -- # set +x 00:14:58.762 [2024-10-07 05:34:02.523433] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
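[editor's note] raid_state_function_test_sb repeats the concat state test with superblock=true. The only structural difference visible in the trace is how the superblock flag is turned into an extra argument for bdev_raid_create; a condensed sketch of that selection logic, with variable names following the script's own:

  superblock=true
  if [ "$superblock" = true ]; then
      superblock_create_arg=-s    # ask bdev_raid_create to write an on-disk superblock
  else
      superblock_create_arg=
  fi
  # later the test runs: rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 $superblock_create_arg -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid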
00:14:58.762 [2024-10-07 05:34:02.523659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.762 [2024-10-07 05:34:02.694366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.020 [2024-10-07 05:34:02.890269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.277 [2024-10-07 05:34:03.083419] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.534 05:34:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.534 05:34:03 -- common/autotest_common.sh@852 -- # return 0 00:14:59.534 05:34:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.792 [2024-10-07 05:34:03.589611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.792 [2024-10-07 05:34:03.589714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.792 [2024-10-07 05:34:03.589743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.792 [2024-10-07 05:34:03.589763] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.792 05:34:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.051 05:34:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.051 "name": "Existed_Raid", 00:15:00.051 "uuid": "b882a81c-7626-4979-9d27-139874f98241", 00:15:00.051 "strip_size_kb": 64, 00:15:00.051 "state": "configuring", 00:15:00.051 "raid_level": "concat", 00:15:00.051 "superblock": true, 00:15:00.051 "num_base_bdevs": 2, 00:15:00.051 "num_base_bdevs_discovered": 0, 00:15:00.051 "num_base_bdevs_operational": 2, 00:15:00.051 "base_bdevs_list": [ 00:15:00.051 { 00:15:00.051 "name": "BaseBdev1", 00:15:00.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.051 "is_configured": false, 00:15:00.051 "data_offset": 0, 00:15:00.051 "data_size": 0 00:15:00.051 }, 00:15:00.051 { 00:15:00.051 "name": "BaseBdev2", 00:15:00.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.051 "is_configured": false, 00:15:00.051 "data_offset": 0, 00:15:00.051 "data_size": 0 00:15:00.051 } 00:15:00.051 ] 00:15:00.051 }' 00:15:00.051 05:34:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.051 05:34:03 -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.617 05:34:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.876 [2024-10-07 05:34:04.785851] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.876 [2024-10-07 05:34:04.785915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:00.876 05:34:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.134 [2024-10-07 05:34:04.981952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.134 [2024-10-07 05:34:04.982031] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.134 [2024-10-07 05:34:04.982043] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.134 [2024-10-07 05:34:04.982067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.134 05:34:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.393 [2024-10-07 05:34:05.284839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.393 BaseBdev1 00:15:01.393 05:34:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:01.393 05:34:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:01.393 05:34:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:01.393 05:34:05 -- common/autotest_common.sh@889 -- # local i 00:15:01.393 05:34:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:01.393 05:34:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:01.393 05:34:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.652 05:34:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.911 [ 00:15:01.911 { 00:15:01.911 "name": "BaseBdev1", 00:15:01.911 "aliases": [ 00:15:01.911 "85bd3aef-f60e-4a86-9bda-da1312696c21" 00:15:01.911 ], 00:15:01.911 "product_name": "Malloc disk", 00:15:01.911 "block_size": 512, 00:15:01.911 "num_blocks": 65536, 00:15:01.911 "uuid": "85bd3aef-f60e-4a86-9bda-da1312696c21", 00:15:01.911 "assigned_rate_limits": { 00:15:01.911 "rw_ios_per_sec": 0, 00:15:01.911 "rw_mbytes_per_sec": 0, 00:15:01.911 "r_mbytes_per_sec": 0, 00:15:01.911 "w_mbytes_per_sec": 0 00:15:01.911 }, 00:15:01.912 "claimed": true, 00:15:01.912 "claim_type": "exclusive_write", 00:15:01.912 "zoned": false, 00:15:01.912 "supported_io_types": { 00:15:01.912 "read": true, 00:15:01.912 "write": true, 00:15:01.912 "unmap": true, 00:15:01.912 "write_zeroes": true, 00:15:01.912 "flush": true, 00:15:01.912 "reset": true, 00:15:01.912 "compare": false, 00:15:01.912 "compare_and_write": false, 00:15:01.912 "abort": true, 00:15:01.912 "nvme_admin": false, 00:15:01.912 "nvme_io": false 00:15:01.912 }, 00:15:01.912 "memory_domains": [ 00:15:01.912 { 00:15:01.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.912 "dma_device_type": 2 00:15:01.912 } 00:15:01.912 ], 00:15:01.912 "driver_specific": {} 00:15:01.912 } 00:15:01.912 ] 00:15:01.912 
05:34:05 -- common/autotest_common.sh@895 -- # return 0 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.912 05:34:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.170 05:34:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.170 "name": "Existed_Raid", 00:15:02.170 "uuid": "f51f5837-9a36-4ebb-ba01-f100b3111df5", 00:15:02.170 "strip_size_kb": 64, 00:15:02.170 "state": "configuring", 00:15:02.170 "raid_level": "concat", 00:15:02.170 "superblock": true, 00:15:02.170 "num_base_bdevs": 2, 00:15:02.170 "num_base_bdevs_discovered": 1, 00:15:02.170 "num_base_bdevs_operational": 2, 00:15:02.170 "base_bdevs_list": [ 00:15:02.170 { 00:15:02.170 "name": "BaseBdev1", 00:15:02.170 "uuid": "85bd3aef-f60e-4a86-9bda-da1312696c21", 00:15:02.170 "is_configured": true, 00:15:02.170 "data_offset": 2048, 00:15:02.170 "data_size": 63488 00:15:02.170 }, 00:15:02.170 { 00:15:02.170 "name": "BaseBdev2", 00:15:02.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.170 "is_configured": false, 00:15:02.170 "data_offset": 0, 00:15:02.170 "data_size": 0 00:15:02.170 } 00:15:02.170 ] 00:15:02.170 }' 00:15:02.170 05:34:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.170 05:34:05 -- common/autotest_common.sh@10 -- # set +x 00:15:02.818 05:34:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.077 [2024-10-07 05:34:06.877175] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.077 [2024-10-07 05:34:06.877239] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:03.077 05:34:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:03.077 05:34:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.335 05:34:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.593 BaseBdev1 00:15:03.593 05:34:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:03.593 05:34:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:03.593 05:34:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:03.593 05:34:07 -- common/autotest_common.sh@889 -- # local i 00:15:03.593 05:34:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:03.593 05:34:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:03.593 05:34:07 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.851 05:34:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.110 [ 00:15:04.110 { 00:15:04.110 "name": "BaseBdev1", 00:15:04.110 "aliases": [ 00:15:04.110 "b39b4450-1581-45a8-ba2f-b82e8a6f568a" 00:15:04.110 ], 00:15:04.110 "product_name": "Malloc disk", 00:15:04.110 "block_size": 512, 00:15:04.110 "num_blocks": 65536, 00:15:04.110 "uuid": "b39b4450-1581-45a8-ba2f-b82e8a6f568a", 00:15:04.110 "assigned_rate_limits": { 00:15:04.110 "rw_ios_per_sec": 0, 00:15:04.110 "rw_mbytes_per_sec": 0, 00:15:04.110 "r_mbytes_per_sec": 0, 00:15:04.110 "w_mbytes_per_sec": 0 00:15:04.110 }, 00:15:04.110 "claimed": false, 00:15:04.110 "zoned": false, 00:15:04.110 "supported_io_types": { 00:15:04.110 "read": true, 00:15:04.110 "write": true, 00:15:04.110 "unmap": true, 00:15:04.110 "write_zeroes": true, 00:15:04.110 "flush": true, 00:15:04.110 "reset": true, 00:15:04.110 "compare": false, 00:15:04.110 "compare_and_write": false, 00:15:04.110 "abort": true, 00:15:04.110 "nvme_admin": false, 00:15:04.110 "nvme_io": false 00:15:04.110 }, 00:15:04.110 "memory_domains": [ 00:15:04.110 { 00:15:04.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.110 "dma_device_type": 2 00:15:04.110 } 00:15:04.110 ], 00:15:04.110 "driver_specific": {} 00:15:04.110 } 00:15:04.110 ] 00:15:04.110 05:34:07 -- common/autotest_common.sh@895 -- # return 0 00:15:04.110 05:34:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.368 [2024-10-07 05:34:08.207142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.368 [2024-10-07 05:34:08.209022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.368 [2024-10-07 05:34:08.209083] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.368 05:34:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.626 05:34:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.626 "name": "Existed_Raid", 00:15:04.626 "uuid": "c4fd0924-2a31-4d6a-838b-ed493d9dd007", 00:15:04.626 "strip_size_kb": 64, 00:15:04.626 "state": 
"configuring", 00:15:04.626 "raid_level": "concat", 00:15:04.626 "superblock": true, 00:15:04.626 "num_base_bdevs": 2, 00:15:04.626 "num_base_bdevs_discovered": 1, 00:15:04.626 "num_base_bdevs_operational": 2, 00:15:04.626 "base_bdevs_list": [ 00:15:04.626 { 00:15:04.626 "name": "BaseBdev1", 00:15:04.626 "uuid": "b39b4450-1581-45a8-ba2f-b82e8a6f568a", 00:15:04.626 "is_configured": true, 00:15:04.626 "data_offset": 2048, 00:15:04.626 "data_size": 63488 00:15:04.626 }, 00:15:04.626 { 00:15:04.626 "name": "BaseBdev2", 00:15:04.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.626 "is_configured": false, 00:15:04.626 "data_offset": 0, 00:15:04.626 "data_size": 0 00:15:04.626 } 00:15:04.626 ] 00:15:04.626 }' 00:15:04.626 05:34:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.626 05:34:08 -- common/autotest_common.sh@10 -- # set +x 00:15:05.192 05:34:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.451 [2024-10-07 05:34:09.261319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.451 [2024-10-07 05:34:09.261529] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:05.451 [2024-10-07 05:34:09.261544] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:05.451 [2024-10-07 05:34:09.261689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:05.451 [2024-10-07 05:34:09.262066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:05.451 [2024-10-07 05:34:09.262087] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:05.451 BaseBdev2 00:15:05.451 [2024-10-07 05:34:09.262221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.451 05:34:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:05.452 05:34:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:05.452 05:34:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:05.452 05:34:09 -- common/autotest_common.sh@889 -- # local i 00:15:05.452 05:34:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:05.452 05:34:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:05.452 05:34:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.710 05:34:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.968 [ 00:15:05.968 { 00:15:05.968 "name": "BaseBdev2", 00:15:05.968 "aliases": [ 00:15:05.968 "79d24524-c33d-40a1-893b-0cad2eed9a99" 00:15:05.968 ], 00:15:05.968 "product_name": "Malloc disk", 00:15:05.968 "block_size": 512, 00:15:05.968 "num_blocks": 65536, 00:15:05.968 "uuid": "79d24524-c33d-40a1-893b-0cad2eed9a99", 00:15:05.968 "assigned_rate_limits": { 00:15:05.968 "rw_ios_per_sec": 0, 00:15:05.968 "rw_mbytes_per_sec": 0, 00:15:05.968 "r_mbytes_per_sec": 0, 00:15:05.968 "w_mbytes_per_sec": 0 00:15:05.968 }, 00:15:05.968 "claimed": true, 00:15:05.968 "claim_type": "exclusive_write", 00:15:05.968 "zoned": false, 00:15:05.968 "supported_io_types": { 00:15:05.968 "read": true, 00:15:05.968 "write": true, 00:15:05.968 "unmap": true, 00:15:05.968 "write_zeroes": true, 00:15:05.968 "flush": true, 00:15:05.968 
"reset": true, 00:15:05.968 "compare": false, 00:15:05.968 "compare_and_write": false, 00:15:05.968 "abort": true, 00:15:05.968 "nvme_admin": false, 00:15:05.968 "nvme_io": false 00:15:05.968 }, 00:15:05.968 "memory_domains": [ 00:15:05.968 { 00:15:05.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.968 "dma_device_type": 2 00:15:05.968 } 00:15:05.968 ], 00:15:05.968 "driver_specific": {} 00:15:05.968 } 00:15:05.968 ] 00:15:05.968 05:34:09 -- common/autotest_common.sh@895 -- # return 0 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.968 05:34:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.227 05:34:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.227 "name": "Existed_Raid", 00:15:06.227 "uuid": "c4fd0924-2a31-4d6a-838b-ed493d9dd007", 00:15:06.227 "strip_size_kb": 64, 00:15:06.227 "state": "online", 00:15:06.227 "raid_level": "concat", 00:15:06.227 "superblock": true, 00:15:06.227 "num_base_bdevs": 2, 00:15:06.227 "num_base_bdevs_discovered": 2, 00:15:06.227 "num_base_bdevs_operational": 2, 00:15:06.227 "base_bdevs_list": [ 00:15:06.227 { 00:15:06.227 "name": "BaseBdev1", 00:15:06.227 "uuid": "b39b4450-1581-45a8-ba2f-b82e8a6f568a", 00:15:06.227 "is_configured": true, 00:15:06.227 "data_offset": 2048, 00:15:06.227 "data_size": 63488 00:15:06.227 }, 00:15:06.227 { 00:15:06.227 "name": "BaseBdev2", 00:15:06.227 "uuid": "79d24524-c33d-40a1-893b-0cad2eed9a99", 00:15:06.227 "is_configured": true, 00:15:06.227 "data_offset": 2048, 00:15:06.227 "data_size": 63488 00:15:06.227 } 00:15:06.227 ] 00:15:06.227 }' 00:15:06.227 05:34:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.227 05:34:09 -- common/autotest_common.sh@10 -- # set +x 00:15:06.795 05:34:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:06.795 [2024-10-07 05:34:10.769742] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.795 [2024-10-07 05:34:10.769771] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.795 [2024-10-07 05:34:10.769834] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:07.054 
05:34:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.054 05:34:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.313 05:34:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.313 "name": "Existed_Raid", 00:15:07.313 "uuid": "c4fd0924-2a31-4d6a-838b-ed493d9dd007", 00:15:07.313 "strip_size_kb": 64, 00:15:07.313 "state": "offline", 00:15:07.313 "raid_level": "concat", 00:15:07.313 "superblock": true, 00:15:07.313 "num_base_bdevs": 2, 00:15:07.313 "num_base_bdevs_discovered": 1, 00:15:07.313 "num_base_bdevs_operational": 1, 00:15:07.313 "base_bdevs_list": [ 00:15:07.313 { 00:15:07.313 "name": null, 00:15:07.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.313 "is_configured": false, 00:15:07.313 "data_offset": 2048, 00:15:07.313 "data_size": 63488 00:15:07.313 }, 00:15:07.313 { 00:15:07.313 "name": "BaseBdev2", 00:15:07.313 "uuid": "79d24524-c33d-40a1-893b-0cad2eed9a99", 00:15:07.313 "is_configured": true, 00:15:07.313 "data_offset": 2048, 00:15:07.313 "data_size": 63488 00:15:07.313 } 00:15:07.313 ] 00:15:07.313 }' 00:15:07.313 05:34:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.313 05:34:11 -- common/autotest_common.sh@10 -- # set +x 00:15:07.879 05:34:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:07.879 05:34:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:07.879 05:34:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.879 05:34:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:08.137 05:34:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:08.137 05:34:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.137 05:34:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:08.396 [2024-10-07 05:34:12.129691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.396 [2024-10-07 05:34:12.129770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:08.396 05:34:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:08.396 05:34:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.396 05:34:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.396 05:34:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.655 05:34:12 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:08.655 05:34:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:08.655 05:34:12 -- bdev/bdev_raid.sh@287 -- # killprocess 137651 00:15:08.655 05:34:12 -- common/autotest_common.sh@926 -- # '[' -z 137651 ']' 00:15:08.655 05:34:12 -- common/autotest_common.sh@930 -- # kill -0 137651 00:15:08.655 05:34:12 -- common/autotest_common.sh@931 -- # uname 00:15:08.655 05:34:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.655 05:34:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137651 00:15:08.655 05:34:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.655 05:34:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.655 killing process with pid 137651 00:15:08.655 05:34:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137651' 00:15:08.655 05:34:12 -- common/autotest_common.sh@945 -- # kill 137651 00:15:08.655 [2024-10-07 05:34:12.432125] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.655 05:34:12 -- common/autotest_common.sh@950 -- # wait 137651 00:15:08.655 [2024-10-07 05:34:12.432259] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.593 05:34:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:09.593 00:15:09.593 real 0m11.028s 00:15:09.593 user 0m19.104s 00:15:09.593 sys 0m1.375s 00:15:09.593 ************************************ 00:15:09.593 END TEST raid_state_function_test_sb 00:15:09.593 ************************************ 00:15:09.593 05:34:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.594 05:34:13 -- common/autotest_common.sh@10 -- # set +x 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:09.594 05:34:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:09.594 05:34:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:09.594 05:34:13 -- common/autotest_common.sh@10 -- # set +x 00:15:09.594 ************************************ 00:15:09.594 START TEST raid_superblock_test 00:15:09.594 ************************************ 00:15:09.594 05:34:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=138451 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138451 
/var/tmp/spdk-raid.sock 00:15:09.594 05:34:13 -- common/autotest_common.sh@819 -- # '[' -z 138451 ']' 00:15:09.594 05:34:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:09.594 05:34:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.594 05:34:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.594 05:34:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.594 05:34:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.594 05:34:13 -- common/autotest_common.sh@10 -- # set +x 00:15:09.854 [2024-10-07 05:34:13.599176] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:09.854 [2024-10-07 05:34:13.599980] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138451 ] 00:15:09.854 [2024-10-07 05:34:13.768087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.113 [2024-10-07 05:34:13.966547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.372 [2024-10-07 05:34:14.157087] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.630 05:34:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.630 05:34:14 -- common/autotest_common.sh@852 -- # return 0 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.630 05:34:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:11.197 malloc1 00:15:11.197 05:34:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.197 [2024-10-07 05:34:15.065213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.197 [2024-10-07 05:34:15.065311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.197 [2024-10-07 05:34:15.065349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:11.197 [2024-10-07 05:34:15.065398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.197 [2024-10-07 05:34:15.067826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.197 [2024-10-07 05:34:15.067877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.197 pt1 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.197 05:34:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:11.456 malloc2 00:15:11.456 05:34:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.715 [2024-10-07 05:34:15.562828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.715 [2024-10-07 05:34:15.562946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.715 [2024-10-07 05:34:15.562996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:11.715 [2024-10-07 05:34:15.563050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.715 [2024-10-07 05:34:15.565287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.715 [2024-10-07 05:34:15.565338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.715 pt2 00:15:11.715 05:34:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:11.715 05:34:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:11.715 05:34:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:11.974 [2024-10-07 05:34:15.806926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.974 [2024-10-07 05:34:15.809128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.974 [2024-10-07 05:34:15.809321] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:11.974 [2024-10-07 05:34:15.809336] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.974 [2024-10-07 05:34:15.809489] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:11.974 [2024-10-07 05:34:15.809911] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:11.974 [2024-10-07 05:34:15.809949] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:11.974 [2024-10-07 05:34:15.810114] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.974 05:34:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.975 05:34:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.975 05:34:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.233 05:34:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.233 "name": "raid_bdev1", 00:15:12.233 "uuid": "c10570f9-da4d-4e94-a366-76d0e22f0ec1", 00:15:12.233 "strip_size_kb": 64, 00:15:12.233 "state": "online", 00:15:12.233 "raid_level": "concat", 00:15:12.233 "superblock": true, 00:15:12.233 "num_base_bdevs": 2, 00:15:12.233 "num_base_bdevs_discovered": 2, 00:15:12.233 "num_base_bdevs_operational": 2, 00:15:12.233 "base_bdevs_list": [ 00:15:12.233 { 00:15:12.233 "name": "pt1", 00:15:12.233 "uuid": "fda0ba3a-eff6-5e64-93b0-7b2f465db468", 00:15:12.233 "is_configured": true, 00:15:12.233 "data_offset": 2048, 00:15:12.233 "data_size": 63488 00:15:12.233 }, 00:15:12.233 { 00:15:12.233 "name": "pt2", 00:15:12.233 "uuid": "f65bffd5-eec9-5b3a-9210-5c61c3fdefa3", 00:15:12.233 "is_configured": true, 00:15:12.233 "data_offset": 2048, 00:15:12.233 "data_size": 63488 00:15:12.233 } 00:15:12.233 ] 00:15:12.233 }' 00:15:12.233 05:34:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.233 05:34:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.801 05:34:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.801 05:34:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:13.061 [2024-10-07 05:34:16.827274] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.061 05:34:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c10570f9-da4d-4e94-a366-76d0e22f0ec1 00:15:13.061 05:34:16 -- bdev/bdev_raid.sh@380 -- # '[' -z c10570f9-da4d-4e94-a366-76d0e22f0ec1 ']' 00:15:13.061 05:34:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:13.061 [2024-10-07 05:34:17.011138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.061 [2024-10-07 05:34:17.011160] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.061 [2024-10-07 05:34:17.011222] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.061 [2024-10-07 05:34:17.011268] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.061 [2024-10-07 05:34:17.011279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:13.061 05:34:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.061 05:34:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:13.319 05:34:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:13.319 05:34:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:13.319 05:34:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.319 05:34:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:13.578 05:34:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.578 05:34:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:13.837 05:34:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:13.837 05:34:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:13.837 05:34:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:13.837 05:34:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:13.837 05:34:17 -- common/autotest_common.sh@640 -- # local es=0 00:15:13.837 05:34:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:13.837 05:34:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.837 05:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.837 05:34:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.837 05:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.837 05:34:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.837 05:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:13.837 05:34:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.837 05:34:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:13.837 05:34:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:14.097 [2024-10-07 05:34:18.019340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:14.097 [2024-10-07 05:34:18.021066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:14.097 [2024-10-07 05:34:18.021133] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:14.097 [2024-10-07 05:34:18.021210] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:14.097 [2024-10-07 05:34:18.021242] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.097 [2024-10-07 05:34:18.021252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:14.097 request: 00:15:14.097 { 00:15:14.097 "name": "raid_bdev1", 00:15:14.097 "raid_level": "concat", 00:15:14.097 "base_bdevs": [ 00:15:14.097 "malloc1", 00:15:14.097 "malloc2" 00:15:14.097 ], 00:15:14.097 "superblock": false, 00:15:14.097 "strip_size_kb": 64, 00:15:14.097 "method": "bdev_raid_create", 00:15:14.097 "req_id": 1 00:15:14.097 } 00:15:14.097 Got JSON-RPC error response 00:15:14.097 response: 00:15:14.097 { 00:15:14.097 "code": -17, 00:15:14.097 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:14.097 } 00:15:14.097 05:34:18 -- common/autotest_common.sh@643 -- # es=1 00:15:14.097 05:34:18 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:14.097 05:34:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:14.097 05:34:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:14.097 05:34:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.097 05:34:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:14.356 05:34:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:14.356 05:34:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:14.356 05:34:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.614 [2024-10-07 05:34:18.491404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.614 [2024-10-07 05:34:18.491497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.614 [2024-10-07 05:34:18.491538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:14.614 [2024-10-07 05:34:18.491565] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.614 [2024-10-07 05:34:18.494063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.614 [2024-10-07 05:34:18.494148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.614 [2024-10-07 05:34:18.494241] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:14.614 [2024-10-07 05:34:18.494299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.614 pt1 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.614 05:34:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.872 05:34:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.872 "name": "raid_bdev1", 00:15:14.872 "uuid": "c10570f9-da4d-4e94-a366-76d0e22f0ec1", 00:15:14.872 "strip_size_kb": 64, 00:15:14.872 "state": "configuring", 00:15:14.872 "raid_level": "concat", 00:15:14.872 "superblock": true, 00:15:14.872 "num_base_bdevs": 2, 00:15:14.872 "num_base_bdevs_discovered": 1, 00:15:14.872 "num_base_bdevs_operational": 2, 00:15:14.872 "base_bdevs_list": [ 00:15:14.872 { 00:15:14.872 "name": "pt1", 00:15:14.872 "uuid": "fda0ba3a-eff6-5e64-93b0-7b2f465db468", 00:15:14.872 "is_configured": true, 00:15:14.872 "data_offset": 2048, 00:15:14.872 "data_size": 63488 00:15:14.872 }, 00:15:14.872 { 00:15:14.872 "name": null, 00:15:14.872 "uuid": 
"f65bffd5-eec9-5b3a-9210-5c61c3fdefa3", 00:15:14.872 "is_configured": false, 00:15:14.872 "data_offset": 2048, 00:15:14.872 "data_size": 63488 00:15:14.872 } 00:15:14.872 ] 00:15:14.872 }' 00:15:14.872 05:34:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.872 05:34:18 -- common/autotest_common.sh@10 -- # set +x 00:15:15.546 05:34:19 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:15.546 05:34:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:15.546 05:34:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.546 05:34:19 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.844 [2024-10-07 05:34:19.551667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.844 [2024-10-07 05:34:19.551793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.844 [2024-10-07 05:34:19.551837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:15.844 [2024-10-07 05:34:19.551866] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.844 [2024-10-07 05:34:19.552445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.844 [2024-10-07 05:34:19.552501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.844 [2024-10-07 05:34:19.552610] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:15.844 [2024-10-07 05:34:19.552637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.844 [2024-10-07 05:34:19.552766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:15.844 [2024-10-07 05:34:19.552779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.844 [2024-10-07 05:34:19.552903] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:15.844 [2024-10-07 05:34:19.553241] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:15.844 [2024-10-07 05:34:19.553266] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:15.844 [2024-10-07 05:34:19.553404] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.844 pt2 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.844 "name": "raid_bdev1", 00:15:15.844 "uuid": "c10570f9-da4d-4e94-a366-76d0e22f0ec1", 00:15:15.844 "strip_size_kb": 64, 00:15:15.844 "state": "online", 00:15:15.844 "raid_level": "concat", 00:15:15.844 "superblock": true, 00:15:15.844 "num_base_bdevs": 2, 00:15:15.844 "num_base_bdevs_discovered": 2, 00:15:15.844 "num_base_bdevs_operational": 2, 00:15:15.844 "base_bdevs_list": [ 00:15:15.844 { 00:15:15.844 "name": "pt1", 00:15:15.844 "uuid": "fda0ba3a-eff6-5e64-93b0-7b2f465db468", 00:15:15.844 "is_configured": true, 00:15:15.844 "data_offset": 2048, 00:15:15.844 "data_size": 63488 00:15:15.844 }, 00:15:15.844 { 00:15:15.844 "name": "pt2", 00:15:15.844 "uuid": "f65bffd5-eec9-5b3a-9210-5c61c3fdefa3", 00:15:15.844 "is_configured": true, 00:15:15.844 "data_offset": 2048, 00:15:15.844 "data_size": 63488 00:15:15.844 } 00:15:15.844 ] 00:15:15.844 }' 00:15:15.844 05:34:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.844 05:34:19 -- common/autotest_common.sh@10 -- # set +x 00:15:16.429 05:34:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:16.429 05:34:20 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:16.689 [2024-10-07 05:34:20.600138] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.689 05:34:20 -- bdev/bdev_raid.sh@430 -- # '[' c10570f9-da4d-4e94-a366-76d0e22f0ec1 '!=' c10570f9-da4d-4e94-a366-76d0e22f0ec1 ']' 00:15:16.689 05:34:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:16.689 05:34:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:16.689 05:34:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:16.689 05:34:20 -- bdev/bdev_raid.sh@511 -- # killprocess 138451 00:15:16.689 05:34:20 -- common/autotest_common.sh@926 -- # '[' -z 138451 ']' 00:15:16.689 05:34:20 -- common/autotest_common.sh@930 -- # kill -0 138451 00:15:16.689 05:34:20 -- common/autotest_common.sh@931 -- # uname 00:15:16.689 05:34:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.689 05:34:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138451 00:15:16.689 05:34:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.689 05:34:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.689 05:34:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138451' 00:15:16.689 killing process with pid 138451 00:15:16.689 05:34:20 -- common/autotest_common.sh@945 -- # kill 138451 00:15:16.689 [2024-10-07 05:34:20.642815] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.689 [2024-10-07 05:34:20.642932] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.689 [2024-10-07 05:34:20.642991] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.689 [2024-10-07 05:34:20.643001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:16.689 05:34:20 -- common/autotest_common.sh@950 -- # wait 138451 00:15:16.948 [2024-10-07 05:34:20.782543] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.886 05:34:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:17.886 00:15:17.886 real 0m8.313s 
00:15:17.886 user 0m14.118s 00:15:17.886 sys 0m1.026s 00:15:17.886 05:34:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.886 05:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:17.886 ************************************ 00:15:17.886 END TEST raid_superblock_test 00:15:17.886 ************************************ 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:18.146 05:34:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:18.146 05:34:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.146 05:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.146 ************************************ 00:15:18.146 START TEST raid_state_function_test 00:15:18.146 ************************************ 00:15:18.146 05:34:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=138958 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.146 Process raid pid: 138958 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138958' 00:15:18.146 05:34:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138958 /var/tmp/spdk-raid.sock 00:15:18.146 05:34:21 -- common/autotest_common.sh@819 -- # '[' -z 138958 ']' 00:15:18.146 05:34:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:18.146 05:34:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.146 05:34:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.146 05:34:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.146 05:34:21 -- common/autotest_common.sh@10 -- # set +x 00:15:18.146 [2024-10-07 05:34:21.969740] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:18.146 [2024-10-07 05:34:21.969923] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.405 [2024-10-07 05:34:22.135099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.405 [2024-10-07 05:34:22.335953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.664 [2024-10-07 05:34:22.533434] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.923 05:34:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.923 05:34:22 -- common/autotest_common.sh@852 -- # return 0 00:15:18.923 05:34:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.181 [2024-10-07 05:34:23.113900] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.181 [2024-10-07 05:34:23.113979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.181 [2024-10-07 05:34:23.113992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.181 [2024-10-07 05:34:23.114014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.181 05:34:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.440 05:34:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.440 "name": "Existed_Raid", 00:15:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.440 "strip_size_kb": 0, 00:15:19.440 "state": "configuring", 00:15:19.440 "raid_level": "raid1", 00:15:19.440 "superblock": false, 00:15:19.440 "num_base_bdevs": 2, 00:15:19.440 "num_base_bdevs_discovered": 0, 00:15:19.440 "num_base_bdevs_operational": 2, 00:15:19.440 "base_bdevs_list": [ 00:15:19.440 { 00:15:19.440 "name": "BaseBdev1", 00:15:19.440 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:19.440 "is_configured": false, 00:15:19.440 "data_offset": 0, 00:15:19.440 "data_size": 0 00:15:19.440 }, 00:15:19.440 { 00:15:19.440 "name": "BaseBdev2", 00:15:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.440 "is_configured": false, 00:15:19.440 "data_offset": 0, 00:15:19.440 "data_size": 0 00:15:19.440 } 00:15:19.440 ] 00:15:19.440 }' 00:15:19.440 05:34:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.440 05:34:23 -- common/autotest_common.sh@10 -- # set +x 00:15:20.034 05:34:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:20.292 [2024-10-07 05:34:24.234033] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.292 [2024-10-07 05:34:24.234226] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:20.292 05:34:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:20.550 [2024-10-07 05:34:24.498076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.550 [2024-10-07 05:34:24.498272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.550 [2024-10-07 05:34:24.498384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.550 [2024-10-07 05:34:24.498567] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.550 05:34:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.809 [2024-10-07 05:34:24.785905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.067 BaseBdev1 00:15:21.068 05:34:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:21.068 05:34:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:21.068 05:34:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:21.068 05:34:24 -- common/autotest_common.sh@889 -- # local i 00:15:21.068 05:34:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:21.068 05:34:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:21.068 05:34:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.068 05:34:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.327 [ 00:15:21.327 { 00:15:21.327 "name": "BaseBdev1", 00:15:21.327 "aliases": [ 00:15:21.327 "f7665b16-1d32-435a-afc0-8637e51bfef7" 00:15:21.327 ], 00:15:21.327 "product_name": "Malloc disk", 00:15:21.327 "block_size": 512, 00:15:21.327 "num_blocks": 65536, 00:15:21.327 "uuid": "f7665b16-1d32-435a-afc0-8637e51bfef7", 00:15:21.327 "assigned_rate_limits": { 00:15:21.327 "rw_ios_per_sec": 0, 00:15:21.327 "rw_mbytes_per_sec": 0, 00:15:21.327 "r_mbytes_per_sec": 0, 00:15:21.327 "w_mbytes_per_sec": 0 00:15:21.327 }, 00:15:21.327 "claimed": true, 00:15:21.327 "claim_type": "exclusive_write", 00:15:21.327 "zoned": false, 00:15:21.327 "supported_io_types": { 00:15:21.327 "read": true, 00:15:21.327 "write": true, 00:15:21.327 "unmap": true, 00:15:21.327 "write_zeroes": true, 
00:15:21.327 "flush": true, 00:15:21.327 "reset": true, 00:15:21.327 "compare": false, 00:15:21.327 "compare_and_write": false, 00:15:21.327 "abort": true, 00:15:21.327 "nvme_admin": false, 00:15:21.327 "nvme_io": false 00:15:21.327 }, 00:15:21.327 "memory_domains": [ 00:15:21.327 { 00:15:21.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.327 "dma_device_type": 2 00:15:21.327 } 00:15:21.327 ], 00:15:21.327 "driver_specific": {} 00:15:21.327 } 00:15:21.327 ] 00:15:21.327 05:34:25 -- common/autotest_common.sh@895 -- # return 0 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.327 05:34:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.586 05:34:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.586 "name": "Existed_Raid", 00:15:21.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.586 "strip_size_kb": 0, 00:15:21.586 "state": "configuring", 00:15:21.586 "raid_level": "raid1", 00:15:21.586 "superblock": false, 00:15:21.586 "num_base_bdevs": 2, 00:15:21.586 "num_base_bdevs_discovered": 1, 00:15:21.586 "num_base_bdevs_operational": 2, 00:15:21.586 "base_bdevs_list": [ 00:15:21.586 { 00:15:21.586 "name": "BaseBdev1", 00:15:21.586 "uuid": "f7665b16-1d32-435a-afc0-8637e51bfef7", 00:15:21.586 "is_configured": true, 00:15:21.586 "data_offset": 0, 00:15:21.586 "data_size": 65536 00:15:21.586 }, 00:15:21.586 { 00:15:21.586 "name": "BaseBdev2", 00:15:21.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.586 "is_configured": false, 00:15:21.586 "data_offset": 0, 00:15:21.586 "data_size": 0 00:15:21.586 } 00:15:21.586 ] 00:15:21.586 }' 00:15:21.586 05:34:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.586 05:34:25 -- common/autotest_common.sh@10 -- # set +x 00:15:22.154 05:34:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.412 [2024-10-07 05:34:26.270325] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.412 [2024-10-07 05:34:26.270562] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:22.412 05:34:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:22.412 05:34:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.671 [2024-10-07 05:34:26.474376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.671 [2024-10-07 05:34:26.476512] bdev.c:8019:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.671 [2024-10-07 05:34:26.476711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.671 05:34:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.929 05:34:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.929 "name": "Existed_Raid", 00:15:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.929 "strip_size_kb": 0, 00:15:22.929 "state": "configuring", 00:15:22.929 "raid_level": "raid1", 00:15:22.929 "superblock": false, 00:15:22.929 "num_base_bdevs": 2, 00:15:22.929 "num_base_bdevs_discovered": 1, 00:15:22.929 "num_base_bdevs_operational": 2, 00:15:22.929 "base_bdevs_list": [ 00:15:22.929 { 00:15:22.929 "name": "BaseBdev1", 00:15:22.929 "uuid": "f7665b16-1d32-435a-afc0-8637e51bfef7", 00:15:22.929 "is_configured": true, 00:15:22.929 "data_offset": 0, 00:15:22.929 "data_size": 65536 00:15:22.929 }, 00:15:22.929 { 00:15:22.929 "name": "BaseBdev2", 00:15:22.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.929 "is_configured": false, 00:15:22.929 "data_offset": 0, 00:15:22.929 "data_size": 0 00:15:22.929 } 00:15:22.929 ] 00:15:22.929 }' 00:15:22.929 05:34:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.929 05:34:26 -- common/autotest_common.sh@10 -- # set +x 00:15:23.496 05:34:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.754 [2024-10-07 05:34:27.588767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.754 [2024-10-07 05:34:27.589098] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:23.754 [2024-10-07 05:34:27.589144] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:23.754 [2024-10-07 05:34:27.589404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:23.754 [2024-10-07 05:34:27.589923] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:23.754 [2024-10-07 05:34:27.590062] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:23.754 [2024-10-07 05:34:27.590558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.754 BaseBdev2 
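For reference, the trace above is the whole life-cycle of bringing a two-leg raid1 from "configuring" to "online": as soon as the missing base bdev exists, the raid module claims it and configures the array. A minimal standalone sketch of that RPC flow follows; the socket path, sizes and bdev names are taken from the log, while the $rpc/$sock shell variables and the ordering "create both legs first, then the array" are illustrative only (the harness above instead creates the array first and adds the missing leg afterwards):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # two 32 MB malloc base bdevs with 512-byte blocks (65536 blocks each, as in the dumps above)
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  # assemble them into a raid1 array without a superblock, as in this test
  $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # the array should now report state "online" with both base bdevs discovered
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'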
00:15:23.754 05:34:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:23.754 05:34:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:23.754 05:34:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.754 05:34:27 -- common/autotest_common.sh@889 -- # local i 00:15:23.754 05:34:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.754 05:34:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.754 05:34:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:24.014 05:34:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.273 [ 00:15:24.273 { 00:15:24.273 "name": "BaseBdev2", 00:15:24.273 "aliases": [ 00:15:24.273 "2a24d30e-0273-4bf7-84f9-a40482e91a74" 00:15:24.273 ], 00:15:24.273 "product_name": "Malloc disk", 00:15:24.273 "block_size": 512, 00:15:24.273 "num_blocks": 65536, 00:15:24.273 "uuid": "2a24d30e-0273-4bf7-84f9-a40482e91a74", 00:15:24.273 "assigned_rate_limits": { 00:15:24.273 "rw_ios_per_sec": 0, 00:15:24.273 "rw_mbytes_per_sec": 0, 00:15:24.273 "r_mbytes_per_sec": 0, 00:15:24.273 "w_mbytes_per_sec": 0 00:15:24.273 }, 00:15:24.273 "claimed": true, 00:15:24.273 "claim_type": "exclusive_write", 00:15:24.273 "zoned": false, 00:15:24.273 "supported_io_types": { 00:15:24.273 "read": true, 00:15:24.273 "write": true, 00:15:24.273 "unmap": true, 00:15:24.273 "write_zeroes": true, 00:15:24.273 "flush": true, 00:15:24.273 "reset": true, 00:15:24.273 "compare": false, 00:15:24.273 "compare_and_write": false, 00:15:24.273 "abort": true, 00:15:24.273 "nvme_admin": false, 00:15:24.273 "nvme_io": false 00:15:24.273 }, 00:15:24.273 "memory_domains": [ 00:15:24.273 { 00:15:24.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.273 "dma_device_type": 2 00:15:24.273 } 00:15:24.273 ], 00:15:24.273 "driver_specific": {} 00:15:24.273 } 00:15:24.273 ] 00:15:24.273 05:34:28 -- common/autotest_common.sh@895 -- # return 0 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.273 "name": "Existed_Raid", 00:15:24.273 "uuid": "d8eb8272-5df2-4d57-958d-58a3524b3a00", 00:15:24.273 "strip_size_kb": 0, 00:15:24.273 "state": "online", 00:15:24.273 "raid_level": "raid1", 
00:15:24.273 "superblock": false, 00:15:24.273 "num_base_bdevs": 2, 00:15:24.273 "num_base_bdevs_discovered": 2, 00:15:24.273 "num_base_bdevs_operational": 2, 00:15:24.273 "base_bdevs_list": [ 00:15:24.273 { 00:15:24.273 "name": "BaseBdev1", 00:15:24.273 "uuid": "f7665b16-1d32-435a-afc0-8637e51bfef7", 00:15:24.273 "is_configured": true, 00:15:24.273 "data_offset": 0, 00:15:24.273 "data_size": 65536 00:15:24.273 }, 00:15:24.273 { 00:15:24.273 "name": "BaseBdev2", 00:15:24.273 "uuid": "2a24d30e-0273-4bf7-84f9-a40482e91a74", 00:15:24.273 "is_configured": true, 00:15:24.273 "data_offset": 0, 00:15:24.273 "data_size": 65536 00:15:24.273 } 00:15:24.273 ] 00:15:24.273 }' 00:15:24.273 05:34:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.273 05:34:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.209 05:34:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:25.209 [2024-10-07 05:34:29.153206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.469 05:34:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.727 05:34:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.727 "name": "Existed_Raid", 00:15:25.727 "uuid": "d8eb8272-5df2-4d57-958d-58a3524b3a00", 00:15:25.727 "strip_size_kb": 0, 00:15:25.727 "state": "online", 00:15:25.727 "raid_level": "raid1", 00:15:25.727 "superblock": false, 00:15:25.727 "num_base_bdevs": 2, 00:15:25.727 "num_base_bdevs_discovered": 1, 00:15:25.727 "num_base_bdevs_operational": 1, 00:15:25.727 "base_bdevs_list": [ 00:15:25.727 { 00:15:25.727 "name": null, 00:15:25.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.727 "is_configured": false, 00:15:25.727 "data_offset": 0, 00:15:25.727 "data_size": 65536 00:15:25.727 }, 00:15:25.727 { 00:15:25.727 "name": "BaseBdev2", 00:15:25.727 "uuid": "2a24d30e-0273-4bf7-84f9-a40482e91a74", 00:15:25.727 "is_configured": true, 00:15:25.727 "data_offset": 0, 00:15:25.727 "data_size": 65536 00:15:25.727 } 00:15:25.727 ] 00:15:25.727 }' 00:15:25.727 05:34:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.728 05:34:29 -- common/autotest_common.sh@10 -- # set +x 00:15:26.294 05:34:30 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:26.294 05:34:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.294 05:34:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.294 05:34:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.553 05:34:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.553 05:34:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.553 05:34:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:26.812 [2024-10-07 05:34:30.623322] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.812 [2024-10-07 05:34:30.623551] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.812 [2024-10-07 05:34:30.623758] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.812 [2024-10-07 05:34:30.693874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.812 [2024-10-07 05:34:30.694177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:26.812 05:34:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.812 05:34:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.812 05:34:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.812 05:34:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.072 05:34:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:27.072 05:34:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:27.072 05:34:30 -- bdev/bdev_raid.sh@287 -- # killprocess 138958 00:15:27.072 05:34:30 -- common/autotest_common.sh@926 -- # '[' -z 138958 ']' 00:15:27.072 05:34:30 -- common/autotest_common.sh@930 -- # kill -0 138958 00:15:27.072 05:34:30 -- common/autotest_common.sh@931 -- # uname 00:15:27.072 05:34:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.072 05:34:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138958 00:15:27.072 05:34:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.072 05:34:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.072 05:34:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138958' 00:15:27.072 killing process with pid 138958 00:15:27.072 05:34:30 -- common/autotest_common.sh@945 -- # kill 138958 00:15:27.072 [2024-10-07 05:34:30.994444] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.072 05:34:30 -- common/autotest_common.sh@950 -- # wait 138958 00:15:27.072 [2024-10-07 05:34:30.994752] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.449 ************************************ 00:15:28.449 END TEST raid_state_function_test 00:15:28.449 ************************************ 00:15:28.449 05:34:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:28.449 00:15:28.449 real 0m10.152s 00:15:28.449 user 0m17.558s 00:15:28.449 sys 0m1.222s 00:15:28.449 05:34:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.449 05:34:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.449 05:34:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:28.449 05:34:32 -- 
common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:28.449 05:34:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.449 05:34:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.449 ************************************ 00:15:28.449 START TEST raid_state_function_test_sb 00:15:28.449 ************************************ 00:15:28.450 05:34:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=139598 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139598' 00:15:28.450 Process raid pid: 139598 00:15:28.450 05:34:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139598 /var/tmp/spdk-raid.sock 00:15:28.450 05:34:32 -- common/autotest_common.sh@819 -- # '[' -z 139598 ']' 00:15:28.450 05:34:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.450 05:34:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.450 05:34:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.450 05:34:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.450 05:34:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.450 [2024-10-07 05:34:32.184583] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
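Each raid_state_function_test* variant starts from the same bring-up pattern visible here: launch a dedicated bdev_svc app on a private RPC socket with bdev_raid debug logging, remember its pid for the later killprocess, and block until the socket answers RPC. Below is a rough sketch of that bring-up, assuming a simple polling loop in place of the harness's waitforlisten (whose internals are not shown in this excerpt) and using rpc_get_methods purely as a readiness probe:

  app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # start the bdev service app with raid debug logging on a private RPC socket
  $app -r $sock -i 0 -L bdev_raid &
  raid_pid=$!
  # wait until the app accepts RPC on that socket (stand-in for waitforlisten)
  until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done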
00:15:28.450 [2024-10-07 05:34:32.184973] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.450 [2024-10-07 05:34:32.334943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.709 [2024-10-07 05:34:32.542195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.968 [2024-10-07 05:34:32.744375] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.227 05:34:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.227 05:34:33 -- common/autotest_common.sh@852 -- # return 0 00:15:29.227 05:34:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.486 [2024-10-07 05:34:33.319488] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.486 [2024-10-07 05:34:33.319734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.486 [2024-10-07 05:34:33.319846] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.486 [2024-10-07 05:34:33.319907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.486 05:34:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.745 05:34:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.745 "name": "Existed_Raid", 00:15:29.745 "uuid": "9beea78c-80ca-4aeb-a212-ad106bce4129", 00:15:29.745 "strip_size_kb": 0, 00:15:29.745 "state": "configuring", 00:15:29.745 "raid_level": "raid1", 00:15:29.745 "superblock": true, 00:15:29.745 "num_base_bdevs": 2, 00:15:29.745 "num_base_bdevs_discovered": 0, 00:15:29.745 "num_base_bdevs_operational": 2, 00:15:29.745 "base_bdevs_list": [ 00:15:29.745 { 00:15:29.745 "name": "BaseBdev1", 00:15:29.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.745 "is_configured": false, 00:15:29.745 "data_offset": 0, 00:15:29.745 "data_size": 0 00:15:29.745 }, 00:15:29.745 { 00:15:29.745 "name": "BaseBdev2", 00:15:29.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.745 "is_configured": false, 00:15:29.745 "data_offset": 0, 00:15:29.745 "data_size": 0 00:15:29.745 } 00:15:29.745 ] 00:15:29.745 }' 00:15:29.745 05:34:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.745 05:34:33 -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.312 05:34:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.571 [2024-10-07 05:34:34.315532] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.571 [2024-10-07 05:34:34.315749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:30.571 05:34:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:30.829 [2024-10-07 05:34:34.579599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.829 [2024-10-07 05:34:34.579808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.829 [2024-10-07 05:34:34.579911] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.829 [2024-10-07 05:34:34.580046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.829 05:34:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.829 [2024-10-07 05:34:34.805709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.088 BaseBdev1 00:15:31.088 05:34:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:31.088 05:34:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:31.088 05:34:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.088 05:34:34 -- common/autotest_common.sh@889 -- # local i 00:15:31.088 05:34:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.088 05:34:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.088 05:34:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.088 05:34:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.347 [ 00:15:31.347 { 00:15:31.347 "name": "BaseBdev1", 00:15:31.347 "aliases": [ 00:15:31.347 "fc132052-7bf1-4c0d-b882-b23a1a21144c" 00:15:31.347 ], 00:15:31.347 "product_name": "Malloc disk", 00:15:31.347 "block_size": 512, 00:15:31.347 "num_blocks": 65536, 00:15:31.347 "uuid": "fc132052-7bf1-4c0d-b882-b23a1a21144c", 00:15:31.347 "assigned_rate_limits": { 00:15:31.347 "rw_ios_per_sec": 0, 00:15:31.347 "rw_mbytes_per_sec": 0, 00:15:31.347 "r_mbytes_per_sec": 0, 00:15:31.347 "w_mbytes_per_sec": 0 00:15:31.347 }, 00:15:31.347 "claimed": true, 00:15:31.347 "claim_type": "exclusive_write", 00:15:31.347 "zoned": false, 00:15:31.347 "supported_io_types": { 00:15:31.347 "read": true, 00:15:31.347 "write": true, 00:15:31.347 "unmap": true, 00:15:31.347 "write_zeroes": true, 00:15:31.347 "flush": true, 00:15:31.347 "reset": true, 00:15:31.347 "compare": false, 00:15:31.347 "compare_and_write": false, 00:15:31.347 "abort": true, 00:15:31.347 "nvme_admin": false, 00:15:31.347 "nvme_io": false 00:15:31.347 }, 00:15:31.347 "memory_domains": [ 00:15:31.347 { 00:15:31.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.347 "dma_device_type": 2 00:15:31.347 } 00:15:31.347 ], 00:15:31.347 "driver_specific": {} 00:15:31.347 } 00:15:31.347 ] 00:15:31.347 05:34:35 -- 
common/autotest_common.sh@895 -- # return 0 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.347 05:34:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.607 05:34:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.607 "name": "Existed_Raid", 00:15:31.607 "uuid": "ac12c7bc-c411-42ea-867d-a0512a35d7f7", 00:15:31.607 "strip_size_kb": 0, 00:15:31.607 "state": "configuring", 00:15:31.607 "raid_level": "raid1", 00:15:31.607 "superblock": true, 00:15:31.607 "num_base_bdevs": 2, 00:15:31.607 "num_base_bdevs_discovered": 1, 00:15:31.607 "num_base_bdevs_operational": 2, 00:15:31.607 "base_bdevs_list": [ 00:15:31.607 { 00:15:31.607 "name": "BaseBdev1", 00:15:31.607 "uuid": "fc132052-7bf1-4c0d-b882-b23a1a21144c", 00:15:31.607 "is_configured": true, 00:15:31.607 "data_offset": 2048, 00:15:31.607 "data_size": 63488 00:15:31.607 }, 00:15:31.607 { 00:15:31.607 "name": "BaseBdev2", 00:15:31.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.607 "is_configured": false, 00:15:31.607 "data_offset": 0, 00:15:31.607 "data_size": 0 00:15:31.607 } 00:15:31.607 ] 00:15:31.607 }' 00:15:31.607 05:34:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.607 05:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:32.173 05:34:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.431 [2024-10-07 05:34:36.402157] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.431 [2024-10-07 05:34:36.402436] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:32.691 05:34:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:32.691 05:34:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.949 05:34:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.208 BaseBdev1 00:15:33.208 05:34:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:33.208 05:34:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:33.208 05:34:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:33.208 05:34:37 -- common/autotest_common.sh@889 -- # local i 00:15:33.208 05:34:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:33.208 05:34:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:33.208 05:34:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.467 05:34:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.730 [ 00:15:33.730 { 00:15:33.730 "name": "BaseBdev1", 00:15:33.730 "aliases": [ 00:15:33.730 "cbc13662-e3bc-4ff1-b380-85cec2641841" 00:15:33.730 ], 00:15:33.730 "product_name": "Malloc disk", 00:15:33.730 "block_size": 512, 00:15:33.730 "num_blocks": 65536, 00:15:33.730 "uuid": "cbc13662-e3bc-4ff1-b380-85cec2641841", 00:15:33.730 "assigned_rate_limits": { 00:15:33.730 "rw_ios_per_sec": 0, 00:15:33.730 "rw_mbytes_per_sec": 0, 00:15:33.730 "r_mbytes_per_sec": 0, 00:15:33.730 "w_mbytes_per_sec": 0 00:15:33.730 }, 00:15:33.730 "claimed": false, 00:15:33.730 "zoned": false, 00:15:33.730 "supported_io_types": { 00:15:33.730 "read": true, 00:15:33.730 "write": true, 00:15:33.730 "unmap": true, 00:15:33.730 "write_zeroes": true, 00:15:33.730 "flush": true, 00:15:33.730 "reset": true, 00:15:33.730 "compare": false, 00:15:33.730 "compare_and_write": false, 00:15:33.730 "abort": true, 00:15:33.730 "nvme_admin": false, 00:15:33.730 "nvme_io": false 00:15:33.730 }, 00:15:33.730 "memory_domains": [ 00:15:33.730 { 00:15:33.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.730 "dma_device_type": 2 00:15:33.730 } 00:15:33.730 ], 00:15:33.730 "driver_specific": {} 00:15:33.730 } 00:15:33.730 ] 00:15:33.730 05:34:37 -- common/autotest_common.sh@895 -- # return 0 00:15:33.730 05:34:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:33.988 [2024-10-07 05:34:37.888312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.989 [2024-10-07 05:34:37.890222] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.989 [2024-10-07 05:34:37.890428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.989 05:34:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.247 05:34:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.247 "name": "Existed_Raid", 00:15:34.247 "uuid": "eaf56389-f67e-4cc9-acc4-f3df9f3400c0", 00:15:34.247 "strip_size_kb": 0, 00:15:34.247 "state": "configuring", 
00:15:34.247 "raid_level": "raid1", 00:15:34.247 "superblock": true, 00:15:34.247 "num_base_bdevs": 2, 00:15:34.247 "num_base_bdevs_discovered": 1, 00:15:34.247 "num_base_bdevs_operational": 2, 00:15:34.247 "base_bdevs_list": [ 00:15:34.247 { 00:15:34.247 "name": "BaseBdev1", 00:15:34.247 "uuid": "cbc13662-e3bc-4ff1-b380-85cec2641841", 00:15:34.247 "is_configured": true, 00:15:34.247 "data_offset": 2048, 00:15:34.247 "data_size": 63488 00:15:34.247 }, 00:15:34.247 { 00:15:34.247 "name": "BaseBdev2", 00:15:34.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.247 "is_configured": false, 00:15:34.247 "data_offset": 0, 00:15:34.247 "data_size": 0 00:15:34.247 } 00:15:34.247 ] 00:15:34.247 }' 00:15:34.247 05:34:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.247 05:34:38 -- common/autotest_common.sh@10 -- # set +x 00:15:35.183 05:34:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.183 [2024-10-07 05:34:39.102403] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.183 [2024-10-07 05:34:39.102902] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:35.183 [2024-10-07 05:34:39.103042] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:35.184 [2024-10-07 05:34:39.103204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:35.184 [2024-10-07 05:34:39.103768] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:35.184 [2024-10-07 05:34:39.103916] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:35.184 BaseBdev2 00:15:35.184 [2024-10-07 05:34:39.104199] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.184 05:34:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:35.184 05:34:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:35.184 05:34:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:35.184 05:34:39 -- common/autotest_common.sh@889 -- # local i 00:15:35.184 05:34:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:35.184 05:34:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:35.184 05:34:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.443 05:34:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.702 [ 00:15:35.702 { 00:15:35.702 "name": "BaseBdev2", 00:15:35.702 "aliases": [ 00:15:35.702 "6c034cb2-e093-41e3-abb0-039bc1d2c705" 00:15:35.702 ], 00:15:35.702 "product_name": "Malloc disk", 00:15:35.702 "block_size": 512, 00:15:35.702 "num_blocks": 65536, 00:15:35.702 "uuid": "6c034cb2-e093-41e3-abb0-039bc1d2c705", 00:15:35.702 "assigned_rate_limits": { 00:15:35.702 "rw_ios_per_sec": 0, 00:15:35.702 "rw_mbytes_per_sec": 0, 00:15:35.702 "r_mbytes_per_sec": 0, 00:15:35.702 "w_mbytes_per_sec": 0 00:15:35.702 }, 00:15:35.702 "claimed": true, 00:15:35.702 "claim_type": "exclusive_write", 00:15:35.702 "zoned": false, 00:15:35.702 "supported_io_types": { 00:15:35.702 "read": true, 00:15:35.702 "write": true, 00:15:35.702 "unmap": true, 00:15:35.702 "write_zeroes": true, 00:15:35.702 "flush": true, 00:15:35.702 "reset": true, 
00:15:35.702 "compare": false, 00:15:35.702 "compare_and_write": false, 00:15:35.702 "abort": true, 00:15:35.702 "nvme_admin": false, 00:15:35.702 "nvme_io": false 00:15:35.702 }, 00:15:35.702 "memory_domains": [ 00:15:35.702 { 00:15:35.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.702 "dma_device_type": 2 00:15:35.702 } 00:15:35.702 ], 00:15:35.702 "driver_specific": {} 00:15:35.702 } 00:15:35.702 ] 00:15:35.702 05:34:39 -- common/autotest_common.sh@895 -- # return 0 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.702 05:34:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.960 05:34:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.960 "name": "Existed_Raid", 00:15:35.960 "uuid": "eaf56389-f67e-4cc9-acc4-f3df9f3400c0", 00:15:35.960 "strip_size_kb": 0, 00:15:35.960 "state": "online", 00:15:35.960 "raid_level": "raid1", 00:15:35.960 "superblock": true, 00:15:35.960 "num_base_bdevs": 2, 00:15:35.960 "num_base_bdevs_discovered": 2, 00:15:35.960 "num_base_bdevs_operational": 2, 00:15:35.960 "base_bdevs_list": [ 00:15:35.960 { 00:15:35.960 "name": "BaseBdev1", 00:15:35.960 "uuid": "cbc13662-e3bc-4ff1-b380-85cec2641841", 00:15:35.960 "is_configured": true, 00:15:35.960 "data_offset": 2048, 00:15:35.960 "data_size": 63488 00:15:35.960 }, 00:15:35.960 { 00:15:35.960 "name": "BaseBdev2", 00:15:35.960 "uuid": "6c034cb2-e093-41e3-abb0-039bc1d2c705", 00:15:35.960 "is_configured": true, 00:15:35.960 "data_offset": 2048, 00:15:35.960 "data_size": 63488 00:15:35.960 } 00:15:35.960 ] 00:15:35.960 }' 00:15:35.960 05:34:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.960 05:34:39 -- common/autotest_common.sh@10 -- # set +x 00:15:36.528 05:34:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.787 [2024-10-07 05:34:40.566790] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.787 
05:34:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.787 05:34:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.046 05:34:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.046 "name": "Existed_Raid", 00:15:37.046 "uuid": "eaf56389-f67e-4cc9-acc4-f3df9f3400c0", 00:15:37.046 "strip_size_kb": 0, 00:15:37.046 "state": "online", 00:15:37.046 "raid_level": "raid1", 00:15:37.046 "superblock": true, 00:15:37.046 "num_base_bdevs": 2, 00:15:37.046 "num_base_bdevs_discovered": 1, 00:15:37.046 "num_base_bdevs_operational": 1, 00:15:37.046 "base_bdevs_list": [ 00:15:37.046 { 00:15:37.046 "name": null, 00:15:37.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.047 "is_configured": false, 00:15:37.047 "data_offset": 2048, 00:15:37.047 "data_size": 63488 00:15:37.047 }, 00:15:37.047 { 00:15:37.047 "name": "BaseBdev2", 00:15:37.047 "uuid": "6c034cb2-e093-41e3-abb0-039bc1d2c705", 00:15:37.047 "is_configured": true, 00:15:37.047 "data_offset": 2048, 00:15:37.047 "data_size": 63488 00:15:37.047 } 00:15:37.047 ] 00:15:37.047 }' 00:15:37.047 05:34:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.047 05:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.639 05:34:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.928 [2024-10-07 05:34:41.852262] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.928 [2024-10-07 05:34:41.852448] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.929 [2024-10-07 05:34:41.852626] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.194 [2024-10-07 05:34:41.915813] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.194 [2024-10-07 05:34:41.916020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:38.194 05:34:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:38.194 05:34:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.194 05:34:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
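The checks traced here are the core of the degraded-array scenario: deleting one mirror leg from an online raid1 must leave the array online with a single discovered base bdev, and only deleting the last remaining leg takes it offline and frees it. A compressed sketch of the first check follows; the harness's verify_raid_bdev_state compares more fields (raid_level, strip_size and the full base_bdevs_list, as the JSON dumps above show), and the jq output string below is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # remove one mirror leg from the running raid1 array
  $rpc -s $sock bdev_malloc_delete BaseBdev1
  # raid1 is redundant, so the array should still be "online", just degraded
  $rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
  # expected for this scenario: online 1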
00:15:38.194 05:34:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.194 05:34:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:38.194 05:34:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:38.194 05:34:42 -- bdev/bdev_raid.sh@287 -- # killprocess 139598 00:15:38.194 05:34:42 -- common/autotest_common.sh@926 -- # '[' -z 139598 ']' 00:15:38.194 05:34:42 -- common/autotest_common.sh@930 -- # kill -0 139598 00:15:38.194 05:34:42 -- common/autotest_common.sh@931 -- # uname 00:15:38.194 05:34:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:38.194 05:34:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139598 00:15:38.194 killing process with pid 139598 00:15:38.194 05:34:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:38.194 05:34:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:38.194 05:34:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139598' 00:15:38.194 05:34:42 -- common/autotest_common.sh@945 -- # kill 139598 00:15:38.194 [2024-10-07 05:34:42.153506] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.194 05:34:42 -- common/autotest_common.sh@950 -- # wait 139598 00:15:38.194 [2024-10-07 05:34:42.153599] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:39.130 ************************************ 00:15:39.130 END TEST raid_state_function_test_sb 00:15:39.130 ************************************ 00:15:39.130 05:34:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:39.130 00:15:39.130 real 0m10.942s 00:15:39.130 user 0m18.977s 00:15:39.130 sys 0m1.396s 00:15:39.130 05:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.130 05:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 05:34:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:39.130 05:34:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:39.130 05:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.130 05:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 ************************************ 00:15:39.389 START TEST raid_superblock_test 00:15:39.389 ************************************ 00:15:39.389 05:34:43 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=140316 00:15:39.389 05:34:43 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 140316 /var/tmp/spdk-raid.sock 00:15:39.389 05:34:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:39.389 05:34:43 -- common/autotest_common.sh@819 -- # '[' -z 140316 ']' 00:15:39.389 05:34:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:39.389 05:34:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:39.389 05:34:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:39.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:39.389 05:34:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:39.389 05:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:39.389 [2024-10-07 05:34:43.196258] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:39.389 [2024-10-07 05:34:43.196780] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140316 ] 00:15:39.389 [2024-10-07 05:34:43.365066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.648 [2024-10-07 05:34:43.529574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.906 [2024-10-07 05:34:43.692607] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:40.165 05:34:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:40.165 05:34:44 -- common/autotest_common.sh@852 -- # return 0 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.165 05:34:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:40.424 malloc1 00:15:40.424 05:34:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.683 [2024-10-07 05:34:44.636026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.683 [2024-10-07 05:34:44.636360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.683 [2024-10-07 05:34:44.636450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:40.683 [2024-10-07 05:34:44.636622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.683 [2024-10-07 05:34:44.639093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.683 [2024-10-07 05:34:44.639279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.683 pt1 00:15:40.683 
05:34:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.683 05:34:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:40.942 malloc2 00:15:40.942 05:34:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:41.201 [2024-10-07 05:34:45.145122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:41.201 [2024-10-07 05:34:45.145498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.201 [2024-10-07 05:34:45.145592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:41.201 [2024-10-07 05:34:45.145952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.201 [2024-10-07 05:34:45.148359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.201 [2024-10-07 05:34:45.148549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:41.201 pt2 00:15:41.201 05:34:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:41.201 05:34:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:41.201 05:34:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:41.460 [2024-10-07 05:34:45.413372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.460 [2024-10-07 05:34:45.415566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.460 [2024-10-07 05:34:45.415959] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:41.460 [2024-10-07 05:34:45.416130] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:41.460 [2024-10-07 05:34:45.416353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:41.460 [2024-10-07 05:34:45.417006] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:41.460 [2024-10-07 05:34:45.417156] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:41.460 [2024-10-07 05:34:45.417529] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.460 05:34:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.719 05:34:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.719 05:34:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.719 "name": "raid_bdev1", 00:15:41.719 "uuid": "171d37e4-1826-4e8a-a80c-1af0ac173db0", 00:15:41.719 "strip_size_kb": 0, 00:15:41.719 "state": "online", 00:15:41.719 "raid_level": "raid1", 00:15:41.719 "superblock": true, 00:15:41.719 "num_base_bdevs": 2, 00:15:41.719 "num_base_bdevs_discovered": 2, 00:15:41.719 "num_base_bdevs_operational": 2, 00:15:41.719 "base_bdevs_list": [ 00:15:41.719 { 00:15:41.719 "name": "pt1", 00:15:41.719 "uuid": "e65e4658-41ab-5bef-99a6-83292d5fb5e7", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 }, 00:15:41.719 { 00:15:41.719 "name": "pt2", 00:15:41.719 "uuid": "628b6e19-baaa-546f-a688-07159707971a", 00:15:41.719 "is_configured": true, 00:15:41.719 "data_offset": 2048, 00:15:41.719 "data_size": 63488 00:15:41.719 } 00:15:41.719 ] 00:15:41.719 }' 00:15:41.719 05:34:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.719 05:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:42.287 05:34:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.287 05:34:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:42.545 [2024-10-07 05:34:46.405802] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.545 05:34:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=171d37e4-1826-4e8a-a80c-1af0ac173db0 00:15:42.545 05:34:46 -- bdev/bdev_raid.sh@380 -- # '[' -z 171d37e4-1826-4e8a-a80c-1af0ac173db0 ']' 00:15:42.545 05:34:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:42.804 [2024-10-07 05:34:46.597634] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.804 [2024-10-07 05:34:46.597786] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.804 [2024-10-07 05:34:46.597993] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.804 [2024-10-07 05:34:46.598193] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.804 [2024-10-07 05:34:46.598316] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:42.804 05:34:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.804 05:34:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:43.063 05:34:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:43.063 05:34:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:43.063 05:34:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.063 05:34:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:43.322 05:34:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.322 05:34:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:43.582 05:34:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:43.582 05:34:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:43.582 05:34:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:43.582 05:34:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:43.582 05:34:47 -- common/autotest_common.sh@640 -- # local es=0 00:15:43.582 05:34:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:43.582 05:34:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.582 05:34:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:43.582 05:34:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.582 05:34:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:43.582 05:34:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.582 05:34:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:43.582 05:34:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.582 05:34:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:43.582 05:34:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:43.841 [2024-10-07 05:34:47.701788] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.841 [2024-10-07 05:34:47.703918] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.841 [2024-10-07 05:34:47.704138] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:43.841 [2024-10-07 05:34:47.704347] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:43.841 [2024-10-07 05:34:47.704520] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.841 [2024-10-07 05:34:47.704636] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:43.841 request: 00:15:43.841 { 00:15:43.841 "name": "raid_bdev1", 00:15:43.841 "raid_level": "raid1", 00:15:43.841 "base_bdevs": [ 00:15:43.841 "malloc1", 00:15:43.841 "malloc2" 00:15:43.841 ], 00:15:43.841 "superblock": false, 00:15:43.841 "method": "bdev_raid_create", 00:15:43.841 "req_id": 1 00:15:43.841 } 00:15:43.841 Got JSON-RPC error response 00:15:43.841 response: 00:15:43.841 { 00:15:43.841 "code": -17, 00:15:43.841 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.841 } 00:15:43.841 05:34:47 -- common/autotest_common.sh@643 -- # es=1 00:15:43.841 05:34:47 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:15:43.841 05:34:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:43.841 05:34:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:43.841 05:34:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.841 05:34:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:44.099 05:34:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:44.099 05:34:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:44.099 05:34:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:44.358 [2024-10-07 05:34:48.081810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.358 [2024-10-07 05:34:48.082041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.358 [2024-10-07 05:34:48.082128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:44.358 [2024-10-07 05:34:48.082266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.358 [2024-10-07 05:34:48.084505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.358 [2024-10-07 05:34:48.084685] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.358 [2024-10-07 05:34:48.084954] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:44.358 [2024-10-07 05:34:48.085116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.358 pt1 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.358 05:34:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.615 05:34:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.615 "name": "raid_bdev1", 00:15:44.615 "uuid": "171d37e4-1826-4e8a-a80c-1af0ac173db0", 00:15:44.615 "strip_size_kb": 0, 00:15:44.615 "state": "configuring", 00:15:44.615 "raid_level": "raid1", 00:15:44.615 "superblock": true, 00:15:44.615 "num_base_bdevs": 2, 00:15:44.615 "num_base_bdevs_discovered": 1, 00:15:44.615 "num_base_bdevs_operational": 2, 00:15:44.615 "base_bdevs_list": [ 00:15:44.615 { 00:15:44.615 "name": "pt1", 00:15:44.615 "uuid": "e65e4658-41ab-5bef-99a6-83292d5fb5e7", 00:15:44.615 "is_configured": true, 00:15:44.615 "data_offset": 2048, 00:15:44.615 "data_size": 63488 00:15:44.615 }, 00:15:44.615 { 00:15:44.615 "name": null, 00:15:44.615 "uuid": "628b6e19-baaa-546f-a688-07159707971a", 00:15:44.615 
"is_configured": false, 00:15:44.615 "data_offset": 2048, 00:15:44.615 "data_size": 63488 00:15:44.615 } 00:15:44.615 ] 00:15:44.615 }' 00:15:44.615 05:34:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.615 05:34:48 -- common/autotest_common.sh@10 -- # set +x 00:15:45.182 05:34:48 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:45.182 05:34:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:45.182 05:34:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:45.182 05:34:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.441 [2024-10-07 05:34:49.178050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.441 [2024-10-07 05:34:49.178331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.441 [2024-10-07 05:34:49.178415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:45.441 [2024-10-07 05:34:49.178742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.441 [2024-10-07 05:34:49.179212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.441 [2024-10-07 05:34:49.179392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.441 [2024-10-07 05:34:49.179600] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:45.441 [2024-10-07 05:34:49.179737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.441 [2024-10-07 05:34:49.179901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:45.441 [2024-10-07 05:34:49.180064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.441 [2024-10-07 05:34:49.180221] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:45.441 [2024-10-07 05:34:49.180694] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:45.441 [2024-10-07 05:34:49.180833] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:45.441 [2024-10-07 05:34:49.181071] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.441 pt2 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.441 05:34:49 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.441 "name": "raid_bdev1", 00:15:45.441 "uuid": "171d37e4-1826-4e8a-a80c-1af0ac173db0", 00:15:45.441 "strip_size_kb": 0, 00:15:45.441 "state": "online", 00:15:45.441 "raid_level": "raid1", 00:15:45.441 "superblock": true, 00:15:45.441 "num_base_bdevs": 2, 00:15:45.441 "num_base_bdevs_discovered": 2, 00:15:45.441 "num_base_bdevs_operational": 2, 00:15:45.441 "base_bdevs_list": [ 00:15:45.441 { 00:15:45.441 "name": "pt1", 00:15:45.441 "uuid": "e65e4658-41ab-5bef-99a6-83292d5fb5e7", 00:15:45.441 "is_configured": true, 00:15:45.441 "data_offset": 2048, 00:15:45.441 "data_size": 63488 00:15:45.441 }, 00:15:45.441 { 00:15:45.441 "name": "pt2", 00:15:45.441 "uuid": "628b6e19-baaa-546f-a688-07159707971a", 00:15:45.441 "is_configured": true, 00:15:45.441 "data_offset": 2048, 00:15:45.441 "data_size": 63488 00:15:45.441 } 00:15:45.441 ] 00:15:45.441 }' 00:15:45.441 05:34:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.441 05:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:46.008 05:34:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.008 05:34:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:46.266 [2024-10-07 05:34:50.210395] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.266 05:34:50 -- bdev/bdev_raid.sh@430 -- # '[' 171d37e4-1826-4e8a-a80c-1af0ac173db0 '!=' 171d37e4-1826-4e8a-a80c-1af0ac173db0 ']' 00:15:46.266 05:34:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:46.266 05:34:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:46.266 05:34:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:46.266 05:34:50 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:46.525 [2024-10-07 05:34:50.410275] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.525 05:34:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.783 05:34:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.783 "name": "raid_bdev1", 00:15:46.783 "uuid": "171d37e4-1826-4e8a-a80c-1af0ac173db0", 00:15:46.783 "strip_size_kb": 0, 00:15:46.783 "state": "online", 00:15:46.783 "raid_level": "raid1", 00:15:46.783 "superblock": true, 00:15:46.783 "num_base_bdevs": 2, 00:15:46.783 "num_base_bdevs_discovered": 1, 00:15:46.783 "num_base_bdevs_operational": 1, 00:15:46.783 
"base_bdevs_list": [ 00:15:46.783 { 00:15:46.783 "name": null, 00:15:46.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.783 "is_configured": false, 00:15:46.783 "data_offset": 2048, 00:15:46.783 "data_size": 63488 00:15:46.783 }, 00:15:46.783 { 00:15:46.783 "name": "pt2", 00:15:46.783 "uuid": "628b6e19-baaa-546f-a688-07159707971a", 00:15:46.783 "is_configured": true, 00:15:46.783 "data_offset": 2048, 00:15:46.783 "data_size": 63488 00:15:46.783 } 00:15:46.783 ] 00:15:46.783 }' 00:15:46.783 05:34:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.783 05:34:50 -- common/autotest_common.sh@10 -- # set +x 00:15:47.349 05:34:51 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:47.608 [2024-10-07 05:34:51.470488] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.608 [2024-10-07 05:34:51.472268] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.608 [2024-10-07 05:34:51.472812] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.608 [2024-10-07 05:34:51.473189] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.608 [2024-10-07 05:34:51.473440] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:47.608 05:34:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.608 05:34:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:47.866 05:34:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:47.866 05:34:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:47.866 05:34:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:47.866 05:34:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:47.866 05:34:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:48.123 05:34:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:48.123 05:34:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:48.123 05:34:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:48.123 05:34:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:48.124 05:34:51 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:48.124 05:34:51 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.382 [2024-10-07 05:34:52.254926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.382 [2024-10-07 05:34:52.255326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.382 [2024-10-07 05:34:52.255488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:48.382 [2024-10-07 05:34:52.255622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.382 [2024-10-07 05:34:52.258449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.382 [2024-10-07 05:34:52.258714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.382 [2024-10-07 05:34:52.259029] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:48.382 [2024-10-07 05:34:52.259261] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.382 [2024-10-07 05:34:52.259584] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:15:48.382 [2024-10-07 05:34:52.259735] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.382 [2024-10-07 05:34:52.259896] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:48.382 [2024-10-07 05:34:52.260386] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:15:48.382 [2024-10-07 05:34:52.260522] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:15:48.382 pt2 00:15:48.382 [2024-10-07 05:34:52.260786] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.382 05:34:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.640 05:34:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.640 "name": "raid_bdev1", 00:15:48.640 "uuid": "171d37e4-1826-4e8a-a80c-1af0ac173db0", 00:15:48.640 "strip_size_kb": 0, 00:15:48.640 "state": "online", 00:15:48.641 "raid_level": "raid1", 00:15:48.641 "superblock": true, 00:15:48.641 "num_base_bdevs": 2, 00:15:48.641 "num_base_bdevs_discovered": 1, 00:15:48.641 "num_base_bdevs_operational": 1, 00:15:48.641 "base_bdevs_list": [ 00:15:48.641 { 00:15:48.641 "name": null, 00:15:48.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.641 "is_configured": false, 00:15:48.641 "data_offset": 2048, 00:15:48.641 "data_size": 63488 00:15:48.641 }, 00:15:48.641 { 00:15:48.641 "name": "pt2", 00:15:48.641 "uuid": "628b6e19-baaa-546f-a688-07159707971a", 00:15:48.641 "is_configured": true, 00:15:48.641 "data_offset": 2048, 00:15:48.641 "data_size": 63488 00:15:48.641 } 00:15:48.641 ] 00:15:48.641 }' 00:15:48.641 05:34:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.641 05:34:52 -- common/autotest_common.sh@10 -- # set +x 00:15:49.576 05:34:53 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:49.576 05:34:53 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:49.576 05:34:53 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:49.576 [2024-10-07 05:34:53.491611] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.576 05:34:53 -- bdev/bdev_raid.sh@506 -- # '[' 171d37e4-1826-4e8a-a80c-1af0ac173db0 '!=' 171d37e4-1826-4e8a-a80c-1af0ac173db0 ']' 00:15:49.576 05:34:53 -- 
bdev/bdev_raid.sh@511 -- # killprocess 140316 00:15:49.576 05:34:53 -- common/autotest_common.sh@926 -- # '[' -z 140316 ']' 00:15:49.576 05:34:53 -- common/autotest_common.sh@930 -- # kill -0 140316 00:15:49.576 05:34:53 -- common/autotest_common.sh@931 -- # uname 00:15:49.576 05:34:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:49.576 05:34:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140316 00:15:49.576 killing process with pid 140316 00:15:49.576 05:34:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:49.576 05:34:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:49.576 05:34:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140316' 00:15:49.576 05:34:53 -- common/autotest_common.sh@945 -- # kill 140316 00:15:49.576 [2024-10-07 05:34:53.537881] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.576 05:34:53 -- common/autotest_common.sh@950 -- # wait 140316 00:15:49.576 [2024-10-07 05:34:53.537968] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.576 [2024-10-07 05:34:53.538029] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.576 [2024-10-07 05:34:53.538040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:15:49.835 [2024-10-07 05:34:53.687699] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.770 ************************************ 00:15:50.770 END TEST raid_superblock_test 00:15:50.770 ************************************ 00:15:50.770 05:34:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:50.770 00:15:50.770 real 0m11.631s 00:15:50.770 user 0m20.491s 00:15:50.770 sys 0m1.411s 00:15:50.770 05:34:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.770 05:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:51.029 05:34:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:51.029 05:34:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:51.029 05:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:51.029 ************************************ 00:15:51.029 START TEST raid_state_function_test 00:15:51.029 ************************************ 00:15:51.029 05:34:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=141150 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141150' 00:15:51.029 Process raid pid: 141150 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:51.029 05:34:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141150 /var/tmp/spdk-raid.sock 00:15:51.029 05:34:54 -- common/autotest_common.sh@819 -- # '[' -z 141150 ']' 00:15:51.029 05:34:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:51.029 05:34:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:51.029 05:34:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:51.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:51.029 05:34:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:51.029 05:34:54 -- common/autotest_common.sh@10 -- # set +x 00:15:51.029 [2024-10-07 05:34:54.885858] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:51.029 [2024-10-07 05:34:54.886390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.289 [2024-10-07 05:34:55.056997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.289 [2024-10-07 05:34:55.258748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.548 [2024-10-07 05:34:55.460447] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.115 05:34:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:52.115 05:34:55 -- common/autotest_common.sh@852 -- # return 0 00:15:52.115 05:34:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:52.375 [2024-10-07 05:34:56.130183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.375 [2024-10-07 05:34:56.130414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.375 [2024-10-07 05:34:56.130597] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.375 [2024-10-07 05:34:56.130669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.375 [2024-10-07 05:34:56.130908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.375 [2024-10-07 05:34:56.131014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.375 05:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.642 05:34:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.642 "name": "Existed_Raid", 00:15:52.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.642 "strip_size_kb": 64, 00:15:52.642 "state": "configuring", 00:15:52.642 "raid_level": "raid0", 00:15:52.642 "superblock": false, 00:15:52.642 "num_base_bdevs": 3, 00:15:52.642 "num_base_bdevs_discovered": 0, 00:15:52.642 "num_base_bdevs_operational": 3, 00:15:52.642 "base_bdevs_list": [ 00:15:52.642 { 00:15:52.642 "name": "BaseBdev1", 00:15:52.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.642 "is_configured": false, 00:15:52.642 "data_offset": 0, 00:15:52.642 "data_size": 0 00:15:52.642 }, 00:15:52.642 { 00:15:52.642 "name": "BaseBdev2", 00:15:52.642 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:52.642 "is_configured": false, 00:15:52.642 "data_offset": 0, 00:15:52.642 "data_size": 0 00:15:52.642 }, 00:15:52.643 { 00:15:52.643 "name": "BaseBdev3", 00:15:52.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.643 "is_configured": false, 00:15:52.643 "data_offset": 0, 00:15:52.643 "data_size": 0 00:15:52.643 } 00:15:52.643 ] 00:15:52.643 }' 00:15:52.643 05:34:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.643 05:34:56 -- common/autotest_common.sh@10 -- # set +x 00:15:53.229 05:34:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:53.489 [2024-10-07 05:34:57.314262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.489 [2024-10-07 05:34:57.314446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:53.489 05:34:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:53.748 [2024-10-07 05:34:57.578375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.748 [2024-10-07 05:34:57.578632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.748 [2024-10-07 05:34:57.578738] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.748 [2024-10-07 05:34:57.578808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.748 [2024-10-07 05:34:57.578901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:53.748 [2024-10-07 05:34:57.579073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:53.748 05:34:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:54.007 [2024-10-07 05:34:57.809307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.007 BaseBdev1 00:15:54.007 05:34:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:54.007 05:34:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:54.007 05:34:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:54.007 05:34:57 -- common/autotest_common.sh@889 -- # local i 00:15:54.007 05:34:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:54.007 05:34:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:54.007 05:34:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.266 05:34:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:54.525 [ 00:15:54.525 { 00:15:54.525 "name": "BaseBdev1", 00:15:54.525 "aliases": [ 00:15:54.525 "a85cc75c-1507-4343-8a1e-771a163f878c" 00:15:54.525 ], 00:15:54.525 "product_name": "Malloc disk", 00:15:54.525 "block_size": 512, 00:15:54.525 "num_blocks": 65536, 00:15:54.525 "uuid": "a85cc75c-1507-4343-8a1e-771a163f878c", 00:15:54.525 "assigned_rate_limits": { 00:15:54.525 "rw_ios_per_sec": 0, 00:15:54.525 "rw_mbytes_per_sec": 0, 00:15:54.525 "r_mbytes_per_sec": 0, 00:15:54.525 "w_mbytes_per_sec": 0 
00:15:54.525 }, 00:15:54.525 "claimed": true, 00:15:54.525 "claim_type": "exclusive_write", 00:15:54.525 "zoned": false, 00:15:54.525 "supported_io_types": { 00:15:54.525 "read": true, 00:15:54.525 "write": true, 00:15:54.525 "unmap": true, 00:15:54.525 "write_zeroes": true, 00:15:54.525 "flush": true, 00:15:54.525 "reset": true, 00:15:54.525 "compare": false, 00:15:54.525 "compare_and_write": false, 00:15:54.525 "abort": true, 00:15:54.525 "nvme_admin": false, 00:15:54.525 "nvme_io": false 00:15:54.525 }, 00:15:54.525 "memory_domains": [ 00:15:54.525 { 00:15:54.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.525 "dma_device_type": 2 00:15:54.525 } 00:15:54.525 ], 00:15:54.525 "driver_specific": {} 00:15:54.525 } 00:15:54.525 ] 00:15:54.525 05:34:58 -- common/autotest_common.sh@895 -- # return 0 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.525 05:34:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.525 "name": "Existed_Raid", 00:15:54.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.525 "strip_size_kb": 64, 00:15:54.525 "state": "configuring", 00:15:54.525 "raid_level": "raid0", 00:15:54.525 "superblock": false, 00:15:54.525 "num_base_bdevs": 3, 00:15:54.525 "num_base_bdevs_discovered": 1, 00:15:54.525 "num_base_bdevs_operational": 3, 00:15:54.525 "base_bdevs_list": [ 00:15:54.525 { 00:15:54.525 "name": "BaseBdev1", 00:15:54.525 "uuid": "a85cc75c-1507-4343-8a1e-771a163f878c", 00:15:54.525 "is_configured": true, 00:15:54.525 "data_offset": 0, 00:15:54.525 "data_size": 65536 00:15:54.525 }, 00:15:54.525 { 00:15:54.525 "name": "BaseBdev2", 00:15:54.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.525 "is_configured": false, 00:15:54.525 "data_offset": 0, 00:15:54.525 "data_size": 0 00:15:54.525 }, 00:15:54.525 { 00:15:54.526 "name": "BaseBdev3", 00:15:54.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.526 "is_configured": false, 00:15:54.526 "data_offset": 0, 00:15:54.526 "data_size": 0 00:15:54.526 } 00:15:54.526 ] 00:15:54.526 }' 00:15:54.526 05:34:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.526 05:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:55.462 05:34:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:55.462 [2024-10-07 05:34:59.293663] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.462 [2024-10-07 05:34:59.293888] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:15:55.462 05:34:59 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:55.462 05:34:59 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:55.721 [2024-10-07 05:34:59.489724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.721 [2024-10-07 05:34:59.491979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.721 [2024-10-07 05:34:59.492192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.721 [2024-10-07 05:34:59.492308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.721 [2024-10-07 05:34:59.492397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.721 05:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.981 05:34:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.981 "name": "Existed_Raid", 00:15:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.981 "strip_size_kb": 64, 00:15:55.981 "state": "configuring", 00:15:55.981 "raid_level": "raid0", 00:15:55.981 "superblock": false, 00:15:55.981 "num_base_bdevs": 3, 00:15:55.981 "num_base_bdevs_discovered": 1, 00:15:55.981 "num_base_bdevs_operational": 3, 00:15:55.981 "base_bdevs_list": [ 00:15:55.981 { 00:15:55.981 "name": "BaseBdev1", 00:15:55.981 "uuid": "a85cc75c-1507-4343-8a1e-771a163f878c", 00:15:55.981 "is_configured": true, 00:15:55.981 "data_offset": 0, 00:15:55.981 "data_size": 65536 00:15:55.981 }, 00:15:55.981 { 00:15:55.981 "name": "BaseBdev2", 00:15:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.981 "is_configured": false, 00:15:55.981 "data_offset": 0, 00:15:55.981 "data_size": 0 00:15:55.981 }, 00:15:55.981 { 00:15:55.981 "name": "BaseBdev3", 00:15:55.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.981 "is_configured": false, 00:15:55.981 "data_offset": 0, 00:15:55.981 "data_size": 0 00:15:55.981 } 00:15:55.981 ] 00:15:55.981 }' 00:15:55.981 05:34:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.981 05:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:56.548 05:35:00 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.806 [2024-10-07 05:35:00.701064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.806 BaseBdev2 00:15:56.806 05:35:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:56.806 05:35:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:56.806 05:35:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:56.806 05:35:00 -- common/autotest_common.sh@889 -- # local i 00:15:56.806 05:35:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:56.806 05:35:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:56.806 05:35:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.065 05:35:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:57.323 [ 00:15:57.323 { 00:15:57.323 "name": "BaseBdev2", 00:15:57.323 "aliases": [ 00:15:57.323 "d199ccfc-b4d7-4272-8e3b-ab16bf713075" 00:15:57.323 ], 00:15:57.323 "product_name": "Malloc disk", 00:15:57.323 "block_size": 512, 00:15:57.323 "num_blocks": 65536, 00:15:57.323 "uuid": "d199ccfc-b4d7-4272-8e3b-ab16bf713075", 00:15:57.323 "assigned_rate_limits": { 00:15:57.323 "rw_ios_per_sec": 0, 00:15:57.323 "rw_mbytes_per_sec": 0, 00:15:57.323 "r_mbytes_per_sec": 0, 00:15:57.323 "w_mbytes_per_sec": 0 00:15:57.323 }, 00:15:57.323 "claimed": true, 00:15:57.323 "claim_type": "exclusive_write", 00:15:57.323 "zoned": false, 00:15:57.323 "supported_io_types": { 00:15:57.323 "read": true, 00:15:57.323 "write": true, 00:15:57.323 "unmap": true, 00:15:57.323 "write_zeroes": true, 00:15:57.323 "flush": true, 00:15:57.323 "reset": true, 00:15:57.323 "compare": false, 00:15:57.323 "compare_and_write": false, 00:15:57.323 "abort": true, 00:15:57.323 "nvme_admin": false, 00:15:57.323 "nvme_io": false 00:15:57.323 }, 00:15:57.323 "memory_domains": [ 00:15:57.323 { 00:15:57.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.323 "dma_device_type": 2 00:15:57.323 } 00:15:57.323 ], 00:15:57.323 "driver_specific": {} 00:15:57.323 } 00:15:57.323 ] 00:15:57.323 05:35:01 -- common/autotest_common.sh@895 -- # return 0 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.323 05:35:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:57.582 05:35:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.582 "name": "Existed_Raid", 00:15:57.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.582 "strip_size_kb": 64, 00:15:57.582 "state": "configuring", 00:15:57.582 "raid_level": "raid0", 00:15:57.582 "superblock": false, 00:15:57.582 "num_base_bdevs": 3, 00:15:57.582 "num_base_bdevs_discovered": 2, 00:15:57.582 "num_base_bdevs_operational": 3, 00:15:57.582 "base_bdevs_list": [ 00:15:57.582 { 00:15:57.582 "name": "BaseBdev1", 00:15:57.582 "uuid": "a85cc75c-1507-4343-8a1e-771a163f878c", 00:15:57.582 "is_configured": true, 00:15:57.582 "data_offset": 0, 00:15:57.582 "data_size": 65536 00:15:57.582 }, 00:15:57.582 { 00:15:57.582 "name": "BaseBdev2", 00:15:57.582 "uuid": "d199ccfc-b4d7-4272-8e3b-ab16bf713075", 00:15:57.582 "is_configured": true, 00:15:57.582 "data_offset": 0, 00:15:57.582 "data_size": 65536 00:15:57.582 }, 00:15:57.582 { 00:15:57.582 "name": "BaseBdev3", 00:15:57.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.582 "is_configured": false, 00:15:57.582 "data_offset": 0, 00:15:57.582 "data_size": 0 00:15:57.582 } 00:15:57.582 ] 00:15:57.582 }' 00:15:57.582 05:35:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.582 05:35:01 -- common/autotest_common.sh@10 -- # set +x 00:15:58.149 05:35:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.407 [2024-10-07 05:35:02.284494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.407 [2024-10-07 05:35:02.284741] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:58.407 [2024-10-07 05:35:02.284787] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:58.407 [2024-10-07 05:35:02.285002] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:58.407 [2024-10-07 05:35:02.285484] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:58.407 [2024-10-07 05:35:02.285608] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:58.407 [2024-10-07 05:35:02.285965] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.407 BaseBdev3 00:15:58.407 05:35:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:58.407 05:35:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:58.408 05:35:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:58.408 05:35:02 -- common/autotest_common.sh@889 -- # local i 00:15:58.408 05:35:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:58.408 05:35:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:58.408 05:35:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.665 05:35:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:58.923 [ 00:15:58.923 { 00:15:58.923 "name": "BaseBdev3", 00:15:58.923 "aliases": [ 00:15:58.923 "199a52e0-28bd-44d2-b0fb-15ae4c5d09d3" 00:15:58.923 ], 00:15:58.923 "product_name": "Malloc disk", 00:15:58.923 "block_size": 512, 00:15:58.923 "num_blocks": 65536, 00:15:58.923 "uuid": "199a52e0-28bd-44d2-b0fb-15ae4c5d09d3", 00:15:58.923 "assigned_rate_limits": { 00:15:58.923 
"rw_ios_per_sec": 0, 00:15:58.923 "rw_mbytes_per_sec": 0, 00:15:58.923 "r_mbytes_per_sec": 0, 00:15:58.923 "w_mbytes_per_sec": 0 00:15:58.923 }, 00:15:58.923 "claimed": true, 00:15:58.923 "claim_type": "exclusive_write", 00:15:58.923 "zoned": false, 00:15:58.923 "supported_io_types": { 00:15:58.923 "read": true, 00:15:58.923 "write": true, 00:15:58.923 "unmap": true, 00:15:58.923 "write_zeroes": true, 00:15:58.923 "flush": true, 00:15:58.923 "reset": true, 00:15:58.923 "compare": false, 00:15:58.923 "compare_and_write": false, 00:15:58.923 "abort": true, 00:15:58.923 "nvme_admin": false, 00:15:58.923 "nvme_io": false 00:15:58.923 }, 00:15:58.923 "memory_domains": [ 00:15:58.923 { 00:15:58.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.923 "dma_device_type": 2 00:15:58.923 } 00:15:58.923 ], 00:15:58.923 "driver_specific": {} 00:15:58.923 } 00:15:58.923 ] 00:15:58.923 05:35:02 -- common/autotest_common.sh@895 -- # return 0 00:15:58.923 05:35:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:58.923 05:35:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.924 05:35:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.182 05:35:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.182 "name": "Existed_Raid", 00:15:59.182 "uuid": "989ef636-edfb-4d2b-9451-915ec557a244", 00:15:59.182 "strip_size_kb": 64, 00:15:59.182 "state": "online", 00:15:59.182 "raid_level": "raid0", 00:15:59.182 "superblock": false, 00:15:59.182 "num_base_bdevs": 3, 00:15:59.182 "num_base_bdevs_discovered": 3, 00:15:59.182 "num_base_bdevs_operational": 3, 00:15:59.182 "base_bdevs_list": [ 00:15:59.182 { 00:15:59.182 "name": "BaseBdev1", 00:15:59.182 "uuid": "a85cc75c-1507-4343-8a1e-771a163f878c", 00:15:59.182 "is_configured": true, 00:15:59.182 "data_offset": 0, 00:15:59.182 "data_size": 65536 00:15:59.182 }, 00:15:59.182 { 00:15:59.182 "name": "BaseBdev2", 00:15:59.182 "uuid": "d199ccfc-b4d7-4272-8e3b-ab16bf713075", 00:15:59.182 "is_configured": true, 00:15:59.182 "data_offset": 0, 00:15:59.182 "data_size": 65536 00:15:59.182 }, 00:15:59.182 { 00:15:59.182 "name": "BaseBdev3", 00:15:59.182 "uuid": "199a52e0-28bd-44d2-b0fb-15ae4c5d09d3", 00:15:59.182 "is_configured": true, 00:15:59.182 "data_offset": 0, 00:15:59.182 "data_size": 65536 00:15:59.182 } 00:15:59.182 ] 00:15:59.182 }' 00:15:59.182 05:35:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.182 05:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 05:35:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:00.009 [2024-10-07 05:35:03.869018] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.009 [2024-10-07 05:35:03.869326] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.009 [2024-10-07 05:35:03.869505] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.009 05:35:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.268 05:35:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.268 "name": "Existed_Raid", 00:16:00.268 "uuid": "989ef636-edfb-4d2b-9451-915ec557a244", 00:16:00.268 "strip_size_kb": 64, 00:16:00.268 "state": "offline", 00:16:00.268 "raid_level": "raid0", 00:16:00.268 "superblock": false, 00:16:00.268 "num_base_bdevs": 3, 00:16:00.268 "num_base_bdevs_discovered": 2, 00:16:00.268 "num_base_bdevs_operational": 2, 00:16:00.268 "base_bdevs_list": [ 00:16:00.268 { 00:16:00.268 "name": null, 00:16:00.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.268 "is_configured": false, 00:16:00.268 "data_offset": 0, 00:16:00.268 "data_size": 65536 00:16:00.268 }, 00:16:00.268 { 00:16:00.268 "name": "BaseBdev2", 00:16:00.268 "uuid": "d199ccfc-b4d7-4272-8e3b-ab16bf713075", 00:16:00.268 "is_configured": true, 00:16:00.268 "data_offset": 0, 00:16:00.268 "data_size": 65536 00:16:00.268 }, 00:16:00.268 { 00:16:00.268 "name": "BaseBdev3", 00:16:00.268 "uuid": "199a52e0-28bd-44d2-b0fb-15ae4c5d09d3", 00:16:00.268 "is_configured": true, 00:16:00.268 "data_offset": 0, 00:16:00.268 "data_size": 65536 00:16:00.268 } 00:16:00.268 ] 00:16:00.268 }' 00:16:00.268 05:35:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.268 05:35:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.835 05:35:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:00.835 05:35:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:00.835 05:35:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.835 05:35:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:01.094 05:35:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:01.094 05:35:04 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.094 05:35:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:01.353 [2024-10-07 05:35:05.138895] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.353 05:35:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:01.353 05:35:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:01.353 05:35:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.353 05:35:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:01.612 05:35:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:01.612 05:35:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.612 05:35:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:01.872 [2024-10-07 05:35:05.677034] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.872 [2024-10-07 05:35:05.677304] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:01.872 05:35:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:01.872 05:35:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:01.872 05:35:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.872 05:35:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:02.132 05:35:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:02.132 05:35:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:02.132 05:35:06 -- bdev/bdev_raid.sh@287 -- # killprocess 141150 00:16:02.132 05:35:06 -- common/autotest_common.sh@926 -- # '[' -z 141150 ']' 00:16:02.132 05:35:06 -- common/autotest_common.sh@930 -- # kill -0 141150 00:16:02.132 05:35:06 -- common/autotest_common.sh@931 -- # uname 00:16:02.132 05:35:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.132 05:35:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141150 00:16:02.132 killing process with pid 141150 00:16:02.132 05:35:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:02.132 05:35:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:02.132 05:35:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141150' 00:16:02.132 05:35:06 -- common/autotest_common.sh@945 -- # kill 141150 00:16:02.132 05:35:06 -- common/autotest_common.sh@950 -- # wait 141150 00:16:02.132 [2024-10-07 05:35:06.043383] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.132 [2024-10-07 05:35:06.043522] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.509 ************************************ 00:16:03.509 END TEST raid_state_function_test 00:16:03.509 ************************************ 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:03.509 00:16:03.509 real 0m12.299s 00:16:03.509 user 0m21.521s 00:16:03.509 sys 0m1.544s 00:16:03.509 05:35:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.509 05:35:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:03.509 05:35:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:03.509 05:35:07 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:03.509 05:35:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.509 ************************************ 00:16:03.509 START TEST raid_state_function_test_sb 00:16:03.509 ************************************ 00:16:03.509 05:35:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:03.509 05:35:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=141935 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:03.510 Process raid pid: 141935 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141935' 00:16:03.510 05:35:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141935 /var/tmp/spdk-raid.sock 00:16:03.510 05:35:07 -- common/autotest_common.sh@819 -- # '[' -z 141935 ']' 00:16:03.510 05:35:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:03.510 05:35:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.510 05:35:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:03.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:03.510 05:35:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.510 05:35:07 -- common/autotest_common.sh@10 -- # set +x 00:16:03.510 [2024-10-07 05:35:07.222096] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
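[editor's note] The raid_state_function_test run that finishes above drives the SPDK JSON-RPC interface through scripts/rpc.py. The following is a minimal, hypothetical sketch (not the upstream bdev_raid.sh test) that replays the same RPC sequence by hand, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and rpc.py sits at the path used throughout this log; only commands that appear verbatim in the log are used.

```bash
#!/usr/bin/env bash
# Sketch only: replay the raid0 state-transition check seen in the log above.
set -euo pipefail

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Three 32 MiB malloc bdevs with a 512-byte block size (65536 blocks each),
# matching the base bdevs reported by bdev_get_bdevs in the log.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "$b"
done

# Assemble them into a raid0 bdev with a 64 KiB strip size (no superblock,
# as in the raid_state_function_test case; the _sb variant adds -s).
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# raid0 has no redundancy, so removing a single base bdev is expected to take
# the array from "online" straight to "offline" rather than "degraded".
rpc bdev_malloc_delete BaseBdev1
rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: offline

# Tear down the remaining base bdevs; the raid bdev is cleaned up once the
# last base bdev disappears, as the raid_bdev_cleanup DEBUG lines show.
rpc bdev_malloc_delete BaseBdev2
rpc bdev_malloc_delete BaseBdev3
```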
00:16:03.510 [2024-10-07 05:35:07.222417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.510 [2024-10-07 05:35:07.374783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.768 [2024-10-07 05:35:07.584062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.028 [2024-10-07 05:35:07.780933] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.287 05:35:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.287 05:35:08 -- common/autotest_common.sh@852 -- # return 0 00:16:04.287 05:35:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:04.546 [2024-10-07 05:35:08.424943] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.546 [2024-10-07 05:35:08.425163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.546 [2024-10-07 05:35:08.425292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.546 [2024-10-07 05:35:08.425362] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.546 [2024-10-07 05:35:08.425637] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.546 [2024-10-07 05:35:08.425726] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.546 05:35:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.805 05:35:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.805 "name": "Existed_Raid", 00:16:04.805 "uuid": "893feba0-291d-4a4c-8bd8-e003deb01b17", 00:16:04.805 "strip_size_kb": 64, 00:16:04.805 "state": "configuring", 00:16:04.805 "raid_level": "raid0", 00:16:04.805 "superblock": true, 00:16:04.805 "num_base_bdevs": 3, 00:16:04.805 "num_base_bdevs_discovered": 0, 00:16:04.805 "num_base_bdevs_operational": 3, 00:16:04.805 "base_bdevs_list": [ 00:16:04.805 { 00:16:04.805 "name": "BaseBdev1", 00:16:04.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.805 "is_configured": false, 00:16:04.805 "data_offset": 0, 00:16:04.805 "data_size": 0 00:16:04.805 }, 00:16:04.805 { 00:16:04.805 "name": "BaseBdev2", 00:16:04.805 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:04.805 "is_configured": false, 00:16:04.805 "data_offset": 0, 00:16:04.805 "data_size": 0 00:16:04.805 }, 00:16:04.805 { 00:16:04.805 "name": "BaseBdev3", 00:16:04.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.805 "is_configured": false, 00:16:04.805 "data_offset": 0, 00:16:04.805 "data_size": 0 00:16:04.805 } 00:16:04.805 ] 00:16:04.805 }' 00:16:04.805 05:35:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.805 05:35:08 -- common/autotest_common.sh@10 -- # set +x 00:16:05.371 05:35:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.630 [2024-10-07 05:35:09.469046] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.630 [2024-10-07 05:35:09.469220] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:05.630 05:35:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:05.889 [2024-10-07 05:35:09.733135] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.889 [2024-10-07 05:35:09.733315] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.889 [2024-10-07 05:35:09.733421] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.889 [2024-10-07 05:35:09.733493] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.889 [2024-10-07 05:35:09.733635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.889 [2024-10-07 05:35:09.733704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.889 05:35:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.148 [2024-10-07 05:35:09.950575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.148 BaseBdev1 00:16:06.148 05:35:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:06.148 05:35:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:06.148 05:35:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.148 05:35:09 -- common/autotest_common.sh@889 -- # local i 00:16:06.148 05:35:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.148 05:35:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.148 05:35:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.407 05:35:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.666 [ 00:16:06.666 { 00:16:06.666 "name": "BaseBdev1", 00:16:06.666 "aliases": [ 00:16:06.666 "3844f788-f7c7-4fc5-ae84-1cb9c622c0c2" 00:16:06.666 ], 00:16:06.666 "product_name": "Malloc disk", 00:16:06.666 "block_size": 512, 00:16:06.666 "num_blocks": 65536, 00:16:06.666 "uuid": "3844f788-f7c7-4fc5-ae84-1cb9c622c0c2", 00:16:06.666 "assigned_rate_limits": { 00:16:06.666 "rw_ios_per_sec": 0, 00:16:06.666 "rw_mbytes_per_sec": 0, 00:16:06.666 "r_mbytes_per_sec": 0, 00:16:06.666 
"w_mbytes_per_sec": 0 00:16:06.666 }, 00:16:06.666 "claimed": true, 00:16:06.666 "claim_type": "exclusive_write", 00:16:06.666 "zoned": false, 00:16:06.666 "supported_io_types": { 00:16:06.666 "read": true, 00:16:06.666 "write": true, 00:16:06.666 "unmap": true, 00:16:06.666 "write_zeroes": true, 00:16:06.666 "flush": true, 00:16:06.666 "reset": true, 00:16:06.666 "compare": false, 00:16:06.666 "compare_and_write": false, 00:16:06.666 "abort": true, 00:16:06.666 "nvme_admin": false, 00:16:06.666 "nvme_io": false 00:16:06.666 }, 00:16:06.666 "memory_domains": [ 00:16:06.666 { 00:16:06.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.666 "dma_device_type": 2 00:16:06.666 } 00:16:06.666 ], 00:16:06.666 "driver_specific": {} 00:16:06.666 } 00:16:06.666 ] 00:16:06.666 05:35:10 -- common/autotest_common.sh@895 -- # return 0 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.666 05:35:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.924 05:35:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.924 "name": "Existed_Raid", 00:16:06.924 "uuid": "7fa2e1f0-68b7-4f5c-813f-c2c8b70f618f", 00:16:06.924 "strip_size_kb": 64, 00:16:06.924 "state": "configuring", 00:16:06.924 "raid_level": "raid0", 00:16:06.924 "superblock": true, 00:16:06.924 "num_base_bdevs": 3, 00:16:06.924 "num_base_bdevs_discovered": 1, 00:16:06.924 "num_base_bdevs_operational": 3, 00:16:06.924 "base_bdevs_list": [ 00:16:06.924 { 00:16:06.924 "name": "BaseBdev1", 00:16:06.924 "uuid": "3844f788-f7c7-4fc5-ae84-1cb9c622c0c2", 00:16:06.924 "is_configured": true, 00:16:06.924 "data_offset": 2048, 00:16:06.924 "data_size": 63488 00:16:06.924 }, 00:16:06.924 { 00:16:06.924 "name": "BaseBdev2", 00:16:06.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.924 "is_configured": false, 00:16:06.924 "data_offset": 0, 00:16:06.924 "data_size": 0 00:16:06.924 }, 00:16:06.924 { 00:16:06.924 "name": "BaseBdev3", 00:16:06.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.924 "is_configured": false, 00:16:06.924 "data_offset": 0, 00:16:06.924 "data_size": 0 00:16:06.924 } 00:16:06.924 ] 00:16:06.924 }' 00:16:06.924 05:35:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.924 05:35:10 -- common/autotest_common.sh@10 -- # set +x 00:16:07.491 05:35:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.750 [2024-10-07 05:35:11.479277] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.750 [2024-10-07 05:35:11.479567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:07.750 05:35:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:07.750 05:35:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.009 05:35:11 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.267 BaseBdev1 00:16:08.267 05:35:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:08.267 05:35:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:08.267 05:35:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:08.267 05:35:12 -- common/autotest_common.sh@889 -- # local i 00:16:08.267 05:35:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:08.267 05:35:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:08.267 05:35:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.525 05:35:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.525 [ 00:16:08.525 { 00:16:08.525 "name": "BaseBdev1", 00:16:08.525 "aliases": [ 00:16:08.525 "3d7fab6c-b361-4cb7-a7ab-2c28229201b5" 00:16:08.525 ], 00:16:08.525 "product_name": "Malloc disk", 00:16:08.525 "block_size": 512, 00:16:08.525 "num_blocks": 65536, 00:16:08.525 "uuid": "3d7fab6c-b361-4cb7-a7ab-2c28229201b5", 00:16:08.525 "assigned_rate_limits": { 00:16:08.525 "rw_ios_per_sec": 0, 00:16:08.525 "rw_mbytes_per_sec": 0, 00:16:08.525 "r_mbytes_per_sec": 0, 00:16:08.525 "w_mbytes_per_sec": 0 00:16:08.525 }, 00:16:08.525 "claimed": false, 00:16:08.525 "zoned": false, 00:16:08.525 "supported_io_types": { 00:16:08.525 "read": true, 00:16:08.525 "write": true, 00:16:08.525 "unmap": true, 00:16:08.526 "write_zeroes": true, 00:16:08.526 "flush": true, 00:16:08.526 "reset": true, 00:16:08.526 "compare": false, 00:16:08.526 "compare_and_write": false, 00:16:08.526 "abort": true, 00:16:08.526 "nvme_admin": false, 00:16:08.526 "nvme_io": false 00:16:08.526 }, 00:16:08.526 "memory_domains": [ 00:16:08.526 { 00:16:08.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.526 "dma_device_type": 2 00:16:08.526 } 00:16:08.526 ], 00:16:08.526 "driver_specific": {} 00:16:08.526 } 00:16:08.526 ] 00:16:08.784 05:35:12 -- common/autotest_common.sh@895 -- # return 0 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:08.784 [2024-10-07 05:35:12.740765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.784 [2024-10-07 05:35:12.742989] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.784 [2024-10-07 05:35:12.743175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.784 [2024-10-07 05:35:12.743283] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.784 [2024-10-07 05:35:12.743352] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.784 
05:35:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.784 05:35:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.044 05:35:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.044 05:35:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.044 05:35:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.044 "name": "Existed_Raid", 00:16:09.044 "uuid": "1088bdca-cd3e-4b12-a0f5-488456453aae", 00:16:09.044 "strip_size_kb": 64, 00:16:09.044 "state": "configuring", 00:16:09.044 "raid_level": "raid0", 00:16:09.044 "superblock": true, 00:16:09.044 "num_base_bdevs": 3, 00:16:09.044 "num_base_bdevs_discovered": 1, 00:16:09.044 "num_base_bdevs_operational": 3, 00:16:09.044 "base_bdevs_list": [ 00:16:09.044 { 00:16:09.044 "name": "BaseBdev1", 00:16:09.044 "uuid": "3d7fab6c-b361-4cb7-a7ab-2c28229201b5", 00:16:09.044 "is_configured": true, 00:16:09.044 "data_offset": 2048, 00:16:09.044 "data_size": 63488 00:16:09.044 }, 00:16:09.044 { 00:16:09.044 "name": "BaseBdev2", 00:16:09.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.044 "is_configured": false, 00:16:09.044 "data_offset": 0, 00:16:09.044 "data_size": 0 00:16:09.044 }, 00:16:09.044 { 00:16:09.044 "name": "BaseBdev3", 00:16:09.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.044 "is_configured": false, 00:16:09.044 "data_offset": 0, 00:16:09.044 "data_size": 0 00:16:09.044 } 00:16:09.044 ] 00:16:09.044 }' 00:16:09.044 05:35:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.044 05:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:09.611 05:35:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.181 [2024-10-07 05:35:13.873665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.181 BaseBdev2 00:16:10.181 05:35:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:10.181 05:35:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:10.181 05:35:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:10.181 05:35:13 -- common/autotest_common.sh@889 -- # local i 00:16:10.181 05:35:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:10.181 05:35:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:10.181 05:35:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.181 05:35:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.450 [ 00:16:10.450 { 00:16:10.450 "name": "BaseBdev2", 00:16:10.450 "aliases": [ 00:16:10.450 
"8c61aade-2420-44d7-8a2e-9a34fdbf2d36" 00:16:10.450 ], 00:16:10.450 "product_name": "Malloc disk", 00:16:10.450 "block_size": 512, 00:16:10.450 "num_blocks": 65536, 00:16:10.450 "uuid": "8c61aade-2420-44d7-8a2e-9a34fdbf2d36", 00:16:10.450 "assigned_rate_limits": { 00:16:10.450 "rw_ios_per_sec": 0, 00:16:10.450 "rw_mbytes_per_sec": 0, 00:16:10.450 "r_mbytes_per_sec": 0, 00:16:10.450 "w_mbytes_per_sec": 0 00:16:10.450 }, 00:16:10.450 "claimed": true, 00:16:10.450 "claim_type": "exclusive_write", 00:16:10.450 "zoned": false, 00:16:10.450 "supported_io_types": { 00:16:10.450 "read": true, 00:16:10.450 "write": true, 00:16:10.450 "unmap": true, 00:16:10.450 "write_zeroes": true, 00:16:10.450 "flush": true, 00:16:10.450 "reset": true, 00:16:10.450 "compare": false, 00:16:10.450 "compare_and_write": false, 00:16:10.450 "abort": true, 00:16:10.450 "nvme_admin": false, 00:16:10.451 "nvme_io": false 00:16:10.451 }, 00:16:10.451 "memory_domains": [ 00:16:10.451 { 00:16:10.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.451 "dma_device_type": 2 00:16:10.451 } 00:16:10.451 ], 00:16:10.451 "driver_specific": {} 00:16:10.451 } 00:16:10.451 ] 00:16:10.451 05:35:14 -- common/autotest_common.sh@895 -- # return 0 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.451 05:35:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.713 05:35:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.713 "name": "Existed_Raid", 00:16:10.713 "uuid": "1088bdca-cd3e-4b12-a0f5-488456453aae", 00:16:10.713 "strip_size_kb": 64, 00:16:10.713 "state": "configuring", 00:16:10.713 "raid_level": "raid0", 00:16:10.713 "superblock": true, 00:16:10.713 "num_base_bdevs": 3, 00:16:10.713 "num_base_bdevs_discovered": 2, 00:16:10.713 "num_base_bdevs_operational": 3, 00:16:10.713 "base_bdevs_list": [ 00:16:10.713 { 00:16:10.713 "name": "BaseBdev1", 00:16:10.713 "uuid": "3d7fab6c-b361-4cb7-a7ab-2c28229201b5", 00:16:10.713 "is_configured": true, 00:16:10.713 "data_offset": 2048, 00:16:10.713 "data_size": 63488 00:16:10.713 }, 00:16:10.713 { 00:16:10.713 "name": "BaseBdev2", 00:16:10.713 "uuid": "8c61aade-2420-44d7-8a2e-9a34fdbf2d36", 00:16:10.713 "is_configured": true, 00:16:10.713 "data_offset": 2048, 00:16:10.713 "data_size": 63488 00:16:10.713 }, 00:16:10.713 { 00:16:10.713 "name": "BaseBdev3", 00:16:10.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.713 "is_configured": false, 00:16:10.713 "data_offset": 0, 00:16:10.714 "data_size": 0 00:16:10.714 
} 00:16:10.714 ] 00:16:10.714 }' 00:16:10.714 05:35:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.714 05:35:14 -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 05:35:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.846 [2024-10-07 05:35:15.523648] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.846 [2024-10-07 05:35:15.524162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:11.846 [2024-10-07 05:35:15.524291] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.846 [2024-10-07 05:35:15.524475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:11.846 [2024-10-07 05:35:15.524916] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:11.846 BaseBdev3 00:16:11.846 [2024-10-07 05:35:15.525107] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:11.846 [2024-10-07 05:35:15.525377] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.846 05:35:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:11.846 05:35:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:11.846 05:35:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:11.846 05:35:15 -- common/autotest_common.sh@889 -- # local i 00:16:11.846 05:35:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:11.846 05:35:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:11.846 05:35:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.846 05:35:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.104 [ 00:16:12.104 { 00:16:12.104 "name": "BaseBdev3", 00:16:12.104 "aliases": [ 00:16:12.105 "a9ccc5b1-02ee-430f-ae40-c277a693d7cf" 00:16:12.105 ], 00:16:12.105 "product_name": "Malloc disk", 00:16:12.105 "block_size": 512, 00:16:12.105 "num_blocks": 65536, 00:16:12.105 "uuid": "a9ccc5b1-02ee-430f-ae40-c277a693d7cf", 00:16:12.105 "assigned_rate_limits": { 00:16:12.105 "rw_ios_per_sec": 0, 00:16:12.105 "rw_mbytes_per_sec": 0, 00:16:12.105 "r_mbytes_per_sec": 0, 00:16:12.105 "w_mbytes_per_sec": 0 00:16:12.105 }, 00:16:12.105 "claimed": true, 00:16:12.105 "claim_type": "exclusive_write", 00:16:12.105 "zoned": false, 00:16:12.105 "supported_io_types": { 00:16:12.105 "read": true, 00:16:12.105 "write": true, 00:16:12.105 "unmap": true, 00:16:12.105 "write_zeroes": true, 00:16:12.105 "flush": true, 00:16:12.105 "reset": true, 00:16:12.105 "compare": false, 00:16:12.105 "compare_and_write": false, 00:16:12.105 "abort": true, 00:16:12.105 "nvme_admin": false, 00:16:12.105 "nvme_io": false 00:16:12.105 }, 00:16:12.105 "memory_domains": [ 00:16:12.105 { 00:16:12.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.105 "dma_device_type": 2 00:16:12.105 } 00:16:12.105 ], 00:16:12.105 "driver_specific": {} 00:16:12.105 } 00:16:12.105 ] 00:16:12.105 05:35:16 -- common/autotest_common.sh@895 -- # return 0 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.105 05:35:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.363 05:35:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.363 "name": "Existed_Raid", 00:16:12.363 "uuid": "1088bdca-cd3e-4b12-a0f5-488456453aae", 00:16:12.363 "strip_size_kb": 64, 00:16:12.363 "state": "online", 00:16:12.363 "raid_level": "raid0", 00:16:12.363 "superblock": true, 00:16:12.363 "num_base_bdevs": 3, 00:16:12.363 "num_base_bdevs_discovered": 3, 00:16:12.363 "num_base_bdevs_operational": 3, 00:16:12.363 "base_bdevs_list": [ 00:16:12.363 { 00:16:12.363 "name": "BaseBdev1", 00:16:12.363 "uuid": "3d7fab6c-b361-4cb7-a7ab-2c28229201b5", 00:16:12.363 "is_configured": true, 00:16:12.363 "data_offset": 2048, 00:16:12.363 "data_size": 63488 00:16:12.363 }, 00:16:12.363 { 00:16:12.363 "name": "BaseBdev2", 00:16:12.363 "uuid": "8c61aade-2420-44d7-8a2e-9a34fdbf2d36", 00:16:12.363 "is_configured": true, 00:16:12.363 "data_offset": 2048, 00:16:12.363 "data_size": 63488 00:16:12.363 }, 00:16:12.363 { 00:16:12.363 "name": "BaseBdev3", 00:16:12.363 "uuid": "a9ccc5b1-02ee-430f-ae40-c277a693d7cf", 00:16:12.363 "is_configured": true, 00:16:12.363 "data_offset": 2048, 00:16:12.363 "data_size": 63488 00:16:12.363 } 00:16:12.363 ] 00:16:12.363 }' 00:16:12.363 05:35:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.363 05:35:16 -- common/autotest_common.sh@10 -- # set +x 00:16:12.930 05:35:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.188 [2024-10-07 05:35:17.080007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.188 [2024-10-07 05:35:17.080186] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.188 [2024-10-07 05:35:17.080356] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.447 05:35:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.707 05:35:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.707 "name": "Existed_Raid", 00:16:13.707 "uuid": "1088bdca-cd3e-4b12-a0f5-488456453aae", 00:16:13.707 "strip_size_kb": 64, 00:16:13.707 "state": "offline", 00:16:13.707 "raid_level": "raid0", 00:16:13.707 "superblock": true, 00:16:13.707 "num_base_bdevs": 3, 00:16:13.707 "num_base_bdevs_discovered": 2, 00:16:13.707 "num_base_bdevs_operational": 2, 00:16:13.707 "base_bdevs_list": [ 00:16:13.707 { 00:16:13.707 "name": null, 00:16:13.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.707 "is_configured": false, 00:16:13.707 "data_offset": 2048, 00:16:13.707 "data_size": 63488 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "name": "BaseBdev2", 00:16:13.707 "uuid": "8c61aade-2420-44d7-8a2e-9a34fdbf2d36", 00:16:13.707 "is_configured": true, 00:16:13.707 "data_offset": 2048, 00:16:13.707 "data_size": 63488 00:16:13.707 }, 00:16:13.707 { 00:16:13.707 "name": "BaseBdev3", 00:16:13.707 "uuid": "a9ccc5b1-02ee-430f-ae40-c277a693d7cf", 00:16:13.707 "is_configured": true, 00:16:13.707 "data_offset": 2048, 00:16:13.707 "data_size": 63488 00:16:13.707 } 00:16:13.707 ] 00:16:13.707 }' 00:16:13.707 05:35:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.707 05:35:17 -- common/autotest_common.sh@10 -- # set +x 00:16:14.275 05:35:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:14.275 05:35:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.275 05:35:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.275 05:35:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:14.540 05:35:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:14.540 05:35:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.540 05:35:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:14.540 [2024-10-07 05:35:18.438630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.800 05:35:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:14.800 05:35:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:14.800 05:35:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.800 05:35:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.058 05:35:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.058 05:35:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.058 05:35:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:15.058 [2024-10-07 05:35:19.036177] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.058 [2024-10-07 
05:35:19.036457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:15.317 05:35:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:15.317 05:35:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.317 05:35:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.317 05:35:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.576 05:35:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:15.576 05:35:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:15.576 05:35:19 -- bdev/bdev_raid.sh@287 -- # killprocess 141935 00:16:15.576 05:35:19 -- common/autotest_common.sh@926 -- # '[' -z 141935 ']' 00:16:15.576 05:35:19 -- common/autotest_common.sh@930 -- # kill -0 141935 00:16:15.576 05:35:19 -- common/autotest_common.sh@931 -- # uname 00:16:15.576 05:35:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.576 05:35:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141935 00:16:15.576 05:35:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.576 05:35:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.576 05:35:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141935' 00:16:15.576 killing process with pid 141935 00:16:15.576 05:35:19 -- common/autotest_common.sh@945 -- # kill 141935 00:16:15.576 05:35:19 -- common/autotest_common.sh@950 -- # wait 141935 00:16:15.576 [2024-10-07 05:35:19.397834] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.576 [2024-10-07 05:35:19.397953] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.513 ************************************ 00:16:16.513 END TEST raid_state_function_test_sb 00:16:16.513 ************************************ 00:16:16.513 05:35:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:16.513 00:16:16.513 real 0m13.266s 00:16:16.513 user 0m23.053s 00:16:16.513 sys 0m1.826s 00:16:16.513 05:35:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.513 05:35:20 -- common/autotest_common.sh@10 -- # set +x 00:16:16.513 05:35:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:16.513 05:35:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:16.513 05:35:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.513 05:35:20 -- common/autotest_common.sh@10 -- # set +x 00:16:16.772 ************************************ 00:16:16.772 START TEST raid_superblock_test 00:16:16.772 ************************************ 00:16:16.772 05:35:20 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:16.772 05:35:20 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@357 -- # raid_pid=142921 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 142921 /var/tmp/spdk-raid.sock 00:16:16.772 05:35:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:16.772 05:35:20 -- common/autotest_common.sh@819 -- # '[' -z 142921 ']' 00:16:16.772 05:35:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.772 05:35:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.772 05:35:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:16.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.772 05:35:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.772 05:35:20 -- common/autotest_common.sh@10 -- # set +x 00:16:16.772 [2024-10-07 05:35:20.551944] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:16.772 [2024-10-07 05:35:20.552097] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142921 ] 00:16:16.772 [2024-10-07 05:35:20.705148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.031 [2024-10-07 05:35:20.887847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.289 [2024-10-07 05:35:21.073762] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.548 05:35:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.548 05:35:21 -- common/autotest_common.sh@852 -- # return 0 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.548 05:35:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:17.806 malloc1 00:16:17.806 05:35:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.066 [2024-10-07 05:35:21.990637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.066 [2024-10-07 05:35:21.990741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.066 
[2024-10-07 05:35:21.990785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:18.066 [2024-10-07 05:35:21.990841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.066 [2024-10-07 05:35:21.993200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.066 [2024-10-07 05:35:21.993254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.066 pt1 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.066 05:35:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:18.634 malloc2 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.634 [2024-10-07 05:35:22.535190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.634 [2024-10-07 05:35:22.535282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.634 [2024-10-07 05:35:22.535326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:18.634 [2024-10-07 05:35:22.535380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.634 [2024-10-07 05:35:22.537662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.634 [2024-10-07 05:35:22.537710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.634 pt2 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.634 05:35:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:18.893 malloc3 00:16:18.893 05:35:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.153 [2024-10-07 05:35:23.025409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.153 [2024-10-07 05:35:23.025488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.153 
[2024-10-07 05:35:23.025534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:19.153 [2024-10-07 05:35:23.025580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.153 pt3 00:16:19.153 [2024-10-07 05:35:23.027895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.153 [2024-10-07 05:35:23.027948] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.153 05:35:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:19.153 05:35:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:19.153 05:35:23 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:19.411 [2024-10-07 05:35:23.281483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.412 [2024-10-07 05:35:23.283550] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.412 [2024-10-07 05:35:23.283621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.412 [2024-10-07 05:35:23.283805] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:19.412 [2024-10-07 05:35:23.283826] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.412 [2024-10-07 05:35:23.283937] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:19.412 [2024-10-07 05:35:23.284271] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:19.412 [2024-10-07 05:35:23.284291] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:19.412 [2024-10-07 05:35:23.284432] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.412 05:35:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.670 05:35:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.670 "name": "raid_bdev1", 00:16:19.670 "uuid": "18e95e9d-0ab9-400e-ac1a-d5e2675884cc", 00:16:19.670 "strip_size_kb": 64, 00:16:19.670 "state": "online", 00:16:19.670 "raid_level": "raid0", 00:16:19.670 "superblock": true, 00:16:19.670 "num_base_bdevs": 3, 00:16:19.670 "num_base_bdevs_discovered": 3, 00:16:19.670 "num_base_bdevs_operational": 3, 00:16:19.670 "base_bdevs_list": [ 00:16:19.670 { 00:16:19.670 "name": "pt1", 00:16:19.670 "uuid": 
"6e91ef4b-6618-5e56-a27f-e6e1e3978470", 00:16:19.670 "is_configured": true, 00:16:19.670 "data_offset": 2048, 00:16:19.670 "data_size": 63488 00:16:19.670 }, 00:16:19.670 { 00:16:19.670 "name": "pt2", 00:16:19.670 "uuid": "a0908c85-d7ec-521f-9123-3b6960d88c0d", 00:16:19.670 "is_configured": true, 00:16:19.670 "data_offset": 2048, 00:16:19.670 "data_size": 63488 00:16:19.670 }, 00:16:19.670 { 00:16:19.670 "name": "pt3", 00:16:19.670 "uuid": "16a5a5b0-1b2c-5ee3-9a04-9c8a03b0cfc5", 00:16:19.670 "is_configured": true, 00:16:19.670 "data_offset": 2048, 00:16:19.670 "data_size": 63488 00:16:19.670 } 00:16:19.670 ] 00:16:19.670 }' 00:16:19.670 05:35:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.670 05:35:23 -- common/autotest_common.sh@10 -- # set +x 00:16:20.238 05:35:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.238 05:35:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:20.497 [2024-10-07 05:35:24.333842] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.497 05:35:24 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=18e95e9d-0ab9-400e-ac1a-d5e2675884cc 00:16:20.497 05:35:24 -- bdev/bdev_raid.sh@380 -- # '[' -z 18e95e9d-0ab9-400e-ac1a-d5e2675884cc ']' 00:16:20.497 05:35:24 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:20.755 [2024-10-07 05:35:24.529663] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.755 [2024-10-07 05:35:24.529688] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.755 [2024-10-07 05:35:24.529774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.755 [2024-10-07 05:35:24.529845] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.755 [2024-10-07 05:35:24.529857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:20.755 05:35:24 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.755 05:35:24 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:21.013 05:35:24 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:21.014 05:35:24 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:21.014 05:35:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.014 05:35:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:21.272 05:35:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.272 05:35:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:21.530 05:35:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.530 05:35:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:21.788 05:35:25 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:21.788 05:35:25 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.047 05:35:25 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:22.047 05:35:25 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.047 05:35:25 -- common/autotest_common.sh@640 -- # local es=0 00:16:22.047 05:35:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.047 05:35:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.047 05:35:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.047 05:35:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.047 05:35:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.047 05:35:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.047 05:35:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.047 05:35:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.047 05:35:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.047 05:35:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.047 [2024-10-07 05:35:25.945881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:22.047 [2024-10-07 05:35:25.947741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:22.047 [2024-10-07 05:35:25.947796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:22.047 [2024-10-07 05:35:25.947851] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:22.047 [2024-10-07 05:35:25.947934] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:22.047 [2024-10-07 05:35:25.947980] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:22.047 [2024-10-07 05:35:25.948029] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.047 [2024-10-07 05:35:25.948041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:22.047 request: 00:16:22.047 { 00:16:22.047 "name": "raid_bdev1", 00:16:22.047 "raid_level": "raid0", 00:16:22.047 "base_bdevs": [ 00:16:22.047 "malloc1", 00:16:22.047 "malloc2", 00:16:22.047 "malloc3" 00:16:22.047 ], 00:16:22.047 "superblock": false, 00:16:22.047 "strip_size_kb": 64, 00:16:22.047 "method": "bdev_raid_create", 00:16:22.047 "req_id": 1 00:16:22.047 } 00:16:22.047 Got JSON-RPC error response 00:16:22.047 response: 00:16:22.047 { 00:16:22.047 "code": -17, 00:16:22.047 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:22.047 } 00:16:22.047 05:35:25 -- common/autotest_common.sh@643 -- # es=1 00:16:22.047 05:35:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:22.047 05:35:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:22.047 05:35:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:22.047 05:35:25 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.047 05:35:25 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:22.307 05:35:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:22.307 05:35:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:22.307 05:35:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:22.566 [2024-10-07 05:35:26.321895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:22.566 [2024-10-07 05:35:26.322089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.566 [2024-10-07 05:35:26.322166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:22.566 [2024-10-07 05:35:26.322298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.566 [2024-10-07 05:35:26.324547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.566 [2024-10-07 05:35:26.324708] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:22.566 [2024-10-07 05:35:26.324913] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:22.566 [2024-10-07 05:35:26.325072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.566 pt1 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.566 "name": "raid_bdev1", 00:16:22.566 "uuid": "18e95e9d-0ab9-400e-ac1a-d5e2675884cc", 00:16:22.566 "strip_size_kb": 64, 00:16:22.566 "state": "configuring", 00:16:22.566 "raid_level": "raid0", 00:16:22.566 "superblock": true, 00:16:22.566 "num_base_bdevs": 3, 00:16:22.566 "num_base_bdevs_discovered": 1, 00:16:22.566 "num_base_bdevs_operational": 3, 00:16:22.566 "base_bdevs_list": [ 00:16:22.566 { 00:16:22.566 "name": "pt1", 00:16:22.566 "uuid": "6e91ef4b-6618-5e56-a27f-e6e1e3978470", 00:16:22.566 "is_configured": true, 00:16:22.566 "data_offset": 2048, 00:16:22.566 "data_size": 63488 00:16:22.566 }, 00:16:22.566 { 00:16:22.566 "name": null, 00:16:22.566 "uuid": "a0908c85-d7ec-521f-9123-3b6960d88c0d", 00:16:22.566 "is_configured": false, 00:16:22.566 "data_offset": 2048, 00:16:22.566 "data_size": 63488 00:16:22.566 }, 00:16:22.566 { 00:16:22.566 "name": null, 00:16:22.566 "uuid": "16a5a5b0-1b2c-5ee3-9a04-9c8a03b0cfc5", 00:16:22.566 "is_configured": false, 00:16:22.566 
"data_offset": 2048, 00:16:22.566 "data_size": 63488 00:16:22.566 } 00:16:22.566 ] 00:16:22.566 }' 00:16:22.566 05:35:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.566 05:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:23.134 05:35:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:23.134 05:35:27 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.393 [2024-10-07 05:35:27.342082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.393 [2024-10-07 05:35:27.342303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.393 [2024-10-07 05:35:27.342385] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:23.393 [2024-10-07 05:35:27.342589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.393 [2024-10-07 05:35:27.343128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.393 [2024-10-07 05:35:27.343335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.393 [2024-10-07 05:35:27.343614] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:23.393 [2024-10-07 05:35:27.343754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.393 pt2 00:16:23.393 05:35:27 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:23.652 [2024-10-07 05:35:27.542143] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.652 05:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.911 05:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.911 "name": "raid_bdev1", 00:16:23.911 "uuid": "18e95e9d-0ab9-400e-ac1a-d5e2675884cc", 00:16:23.911 "strip_size_kb": 64, 00:16:23.911 "state": "configuring", 00:16:23.911 "raid_level": "raid0", 00:16:23.911 "superblock": true, 00:16:23.911 "num_base_bdevs": 3, 00:16:23.911 "num_base_bdevs_discovered": 1, 00:16:23.911 "num_base_bdevs_operational": 3, 00:16:23.911 "base_bdevs_list": [ 00:16:23.911 { 00:16:23.911 "name": "pt1", 00:16:23.911 "uuid": "6e91ef4b-6618-5e56-a27f-e6e1e3978470", 00:16:23.911 "is_configured": true, 00:16:23.911 "data_offset": 2048, 00:16:23.911 "data_size": 63488 00:16:23.911 }, 00:16:23.911 { 00:16:23.911 "name": null, 00:16:23.911 "uuid": 
"a0908c85-d7ec-521f-9123-3b6960d88c0d", 00:16:23.911 "is_configured": false, 00:16:23.911 "data_offset": 2048, 00:16:23.911 "data_size": 63488 00:16:23.911 }, 00:16:23.911 { 00:16:23.911 "name": null, 00:16:23.911 "uuid": "16a5a5b0-1b2c-5ee3-9a04-9c8a03b0cfc5", 00:16:23.911 "is_configured": false, 00:16:23.911 "data_offset": 2048, 00:16:23.911 "data_size": 63488 00:16:23.911 } 00:16:23.911 ] 00:16:23.911 }' 00:16:23.911 05:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.911 05:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:24.477 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:24.477 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.477 05:35:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.735 [2024-10-07 05:35:28.494282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.735 [2024-10-07 05:35:28.494506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.735 [2024-10-07 05:35:28.494617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:24.735 [2024-10-07 05:35:28.494905] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.735 [2024-10-07 05:35:28.495366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.735 [2024-10-07 05:35:28.495539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.735 [2024-10-07 05:35:28.495755] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:24.735 [2024-10-07 05:35:28.495883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.735 pt2 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.735 [2024-10-07 05:35:28.694336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.735 [2024-10-07 05:35:28.694602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.735 [2024-10-07 05:35:28.694697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:24.735 [2024-10-07 05:35:28.695069] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.735 [2024-10-07 05:35:28.695640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.735 [2024-10-07 05:35:28.695806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.735 [2024-10-07 05:35:28.696031] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:24.735 [2024-10-07 05:35:28.696168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.735 [2024-10-07 05:35:28.696321] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:24.735 [2024-10-07 05:35:28.696444] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.735 [2024-10-07 05:35:28.696587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:16:24.735 [2024-10-07 05:35:28.696985] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:24.735 [2024-10-07 05:35:28.697112] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:24.735 [2024-10-07 05:35:28.697325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.735 pt3 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.735 05:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.994 05:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.994 05:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.994 05:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.994 "name": "raid_bdev1", 00:16:24.994 "uuid": "18e95e9d-0ab9-400e-ac1a-d5e2675884cc", 00:16:24.994 "strip_size_kb": 64, 00:16:24.994 "state": "online", 00:16:24.994 "raid_level": "raid0", 00:16:24.994 "superblock": true, 00:16:24.994 "num_base_bdevs": 3, 00:16:24.994 "num_base_bdevs_discovered": 3, 00:16:24.994 "num_base_bdevs_operational": 3, 00:16:24.994 "base_bdevs_list": [ 00:16:24.994 { 00:16:24.994 "name": "pt1", 00:16:24.994 "uuid": "6e91ef4b-6618-5e56-a27f-e6e1e3978470", 00:16:24.994 "is_configured": true, 00:16:24.994 "data_offset": 2048, 00:16:24.994 "data_size": 63488 00:16:24.994 }, 00:16:24.994 { 00:16:24.994 "name": "pt2", 00:16:24.994 "uuid": "a0908c85-d7ec-521f-9123-3b6960d88c0d", 00:16:24.994 "is_configured": true, 00:16:24.994 "data_offset": 2048, 00:16:24.994 "data_size": 63488 00:16:24.994 }, 00:16:24.994 { 00:16:24.994 "name": "pt3", 00:16:24.994 "uuid": "16a5a5b0-1b2c-5ee3-9a04-9c8a03b0cfc5", 00:16:24.994 "is_configured": true, 00:16:24.994 "data_offset": 2048, 00:16:24.994 "data_size": 63488 00:16:24.994 } 00:16:24.994 ] 00:16:24.994 }' 00:16:24.994 05:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.994 05:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:25.562 05:35:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:25.562 05:35:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.822 [2024-10-07 05:35:29.710769] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.822 05:35:29 -- bdev/bdev_raid.sh@430 -- # '[' 18e95e9d-0ab9-400e-ac1a-d5e2675884cc '!=' 18e95e9d-0ab9-400e-ac1a-d5e2675884cc ']' 00:16:25.822 05:35:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:25.822 05:35:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:25.822 
05:35:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:25.822 05:35:29 -- bdev/bdev_raid.sh@511 -- # killprocess 142921 00:16:25.822 05:35:29 -- common/autotest_common.sh@926 -- # '[' -z 142921 ']' 00:16:25.822 05:35:29 -- common/autotest_common.sh@930 -- # kill -0 142921 00:16:25.822 05:35:29 -- common/autotest_common.sh@931 -- # uname 00:16:25.822 05:35:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.822 05:35:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142921 00:16:25.822 killing process with pid 142921 00:16:25.822 05:35:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:25.822 05:35:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:25.822 05:35:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142921' 00:16:25.822 05:35:29 -- common/autotest_common.sh@945 -- # kill 142921 00:16:25.822 [2024-10-07 05:35:29.744762] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.822 [2024-10-07 05:35:29.744831] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.822 05:35:29 -- common/autotest_common.sh@950 -- # wait 142921 00:16:25.822 [2024-10-07 05:35:29.744887] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.822 [2024-10-07 05:35:29.744898] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:26.081 [2024-10-07 05:35:29.950920] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.042 05:35:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:27.042 00:16:27.042 real 0m10.506s 00:16:27.042 user 0m18.080s 00:16:27.042 sys 0m1.294s 00:16:27.042 05:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.042 05:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 ************************************ 00:16:27.042 END TEST raid_superblock_test 00:16:27.042 ************************************ 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:27.315 05:35:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:27.315 05:35:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.315 05:35:31 -- common/autotest_common.sh@10 -- # set +x 00:16:27.315 ************************************ 00:16:27.315 START TEST raid_state_function_test 00:16:27.315 ************************************ 00:16:27.315 05:35:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=143576 00:16:27.315 Process raid pid: 143576 00:16:27.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 143576' 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 143576 /var/tmp/spdk-raid.sock 00:16:27.315 05:35:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:27.315 05:35:31 -- common/autotest_common.sh@819 -- # '[' -z 143576 ']' 00:16:27.315 05:35:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:27.315 05:35:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.315 05:35:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:27.315 05:35:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.315 05:35:31 -- common/autotest_common.sh@10 -- # set +x 00:16:27.315 [2024-10-07 05:35:31.127063] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
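The harness pattern traced above is worth calling out: each state-function test starts the bare bdev_svc application on a private RPC socket and then drives every step through rpc.py against that socket. A minimal by-hand equivalent, built only from commands that already appear in this log (the rpc_get_methods liveness probe is an assumption; the test's own waitforlisten helper plays that role), would be:

  # start a bdev-only SPDK app on a private socket, with bdev_raid debug logging
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # poll until the RPC server answers (assumed probe), then create a malloc base bdev to build on
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1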
00:16:27.315 [2024-10-07 05:35:31.127269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.315 [2024-10-07 05:35:31.290759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.574 [2024-10-07 05:35:31.496159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.833 [2024-10-07 05:35:31.690006] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.401 05:35:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:28.401 05:35:32 -- common/autotest_common.sh@852 -- # return 0 00:16:28.401 05:35:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:28.401 [2024-10-07 05:35:32.363915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.401 [2024-10-07 05:35:32.363998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.401 [2024-10-07 05:35:32.364011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.401 [2024-10-07 05:35:32.364032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.401 [2024-10-07 05:35:32.364040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.401 [2024-10-07 05:35:32.364083] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.659 05:35:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.917 05:35:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.917 "name": "Existed_Raid", 00:16:28.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.917 "strip_size_kb": 64, 00:16:28.917 "state": "configuring", 00:16:28.917 "raid_level": "concat", 00:16:28.917 "superblock": false, 00:16:28.917 "num_base_bdevs": 3, 00:16:28.917 "num_base_bdevs_discovered": 0, 00:16:28.917 "num_base_bdevs_operational": 3, 00:16:28.917 "base_bdevs_list": [ 00:16:28.917 { 00:16:28.917 "name": "BaseBdev1", 00:16:28.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.917 "is_configured": false, 00:16:28.917 "data_offset": 0, 00:16:28.917 "data_size": 0 00:16:28.917 }, 00:16:28.917 { 00:16:28.917 "name": "BaseBdev2", 00:16:28.917 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.917 "is_configured": false, 00:16:28.917 "data_offset": 0, 00:16:28.917 "data_size": 0 00:16:28.917 }, 00:16:28.917 { 00:16:28.917 "name": "BaseBdev3", 00:16:28.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.917 "is_configured": false, 00:16:28.917 "data_offset": 0, 00:16:28.917 "data_size": 0 00:16:28.917 } 00:16:28.917 ] 00:16:28.917 }' 00:16:28.917 05:35:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.917 05:35:32 -- common/autotest_common.sh@10 -- # set +x 00:16:29.482 05:35:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.739 [2024-10-07 05:35:33.516025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.739 [2024-10-07 05:35:33.516072] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:29.739 05:35:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:29.997 [2024-10-07 05:35:33.780057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.997 [2024-10-07 05:35:33.780124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.997 [2024-10-07 05:35:33.780139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.997 [2024-10-07 05:35:33.780176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.997 [2024-10-07 05:35:33.780184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.997 [2024-10-07 05:35:33.780210] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.997 05:35:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.254 [2024-10-07 05:35:34.010423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.254 BaseBdev1 00:16:30.254 05:35:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:30.254 05:35:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:30.254 05:35:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:30.254 05:35:34 -- common/autotest_common.sh@889 -- # local i 00:16:30.254 05:35:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:30.254 05:35:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:30.254 05:35:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.513 05:35:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.513 [ 00:16:30.513 { 00:16:30.513 "name": "BaseBdev1", 00:16:30.513 "aliases": [ 00:16:30.513 "f7510285-a5e0-49b9-a9ee-7810f03fed07" 00:16:30.513 ], 00:16:30.513 "product_name": "Malloc disk", 00:16:30.513 "block_size": 512, 00:16:30.513 "num_blocks": 65536, 00:16:30.513 "uuid": "f7510285-a5e0-49b9-a9ee-7810f03fed07", 00:16:30.513 "assigned_rate_limits": { 00:16:30.513 "rw_ios_per_sec": 0, 00:16:30.513 "rw_mbytes_per_sec": 0, 00:16:30.513 "r_mbytes_per_sec": 0, 00:16:30.513 "w_mbytes_per_sec": 
0 00:16:30.513 }, 00:16:30.513 "claimed": true, 00:16:30.513 "claim_type": "exclusive_write", 00:16:30.513 "zoned": false, 00:16:30.513 "supported_io_types": { 00:16:30.513 "read": true, 00:16:30.513 "write": true, 00:16:30.513 "unmap": true, 00:16:30.513 "write_zeroes": true, 00:16:30.513 "flush": true, 00:16:30.513 "reset": true, 00:16:30.513 "compare": false, 00:16:30.513 "compare_and_write": false, 00:16:30.513 "abort": true, 00:16:30.513 "nvme_admin": false, 00:16:30.513 "nvme_io": false 00:16:30.513 }, 00:16:30.513 "memory_domains": [ 00:16:30.513 { 00:16:30.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.513 "dma_device_type": 2 00:16:30.513 } 00:16:30.513 ], 00:16:30.513 "driver_specific": {} 00:16:30.513 } 00:16:30.513 ] 00:16:30.513 05:35:34 -- common/autotest_common.sh@895 -- # return 0 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.513 05:35:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.770 05:35:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.771 "name": "Existed_Raid", 00:16:30.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.771 "strip_size_kb": 64, 00:16:30.771 "state": "configuring", 00:16:30.771 "raid_level": "concat", 00:16:30.771 "superblock": false, 00:16:30.771 "num_base_bdevs": 3, 00:16:30.771 "num_base_bdevs_discovered": 1, 00:16:30.771 "num_base_bdevs_operational": 3, 00:16:30.771 "base_bdevs_list": [ 00:16:30.771 { 00:16:30.771 "name": "BaseBdev1", 00:16:30.771 "uuid": "f7510285-a5e0-49b9-a9ee-7810f03fed07", 00:16:30.771 "is_configured": true, 00:16:30.771 "data_offset": 0, 00:16:30.771 "data_size": 65536 00:16:30.771 }, 00:16:30.771 { 00:16:30.771 "name": "BaseBdev2", 00:16:30.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.771 "is_configured": false, 00:16:30.771 "data_offset": 0, 00:16:30.771 "data_size": 0 00:16:30.771 }, 00:16:30.771 { 00:16:30.771 "name": "BaseBdev3", 00:16:30.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.771 "is_configured": false, 00:16:30.771 "data_offset": 0, 00:16:30.771 "data_size": 0 00:16:30.771 } 00:16:30.771 ] 00:16:30.771 }' 00:16:30.771 05:35:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.771 05:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:31.336 05:35:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:31.593 [2024-10-07 05:35:35.450802] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.593 [2024-10-07 05:35:35.450853] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:16:31.593 05:35:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:31.593 05:35:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.851 [2024-10-07 05:35:35.634946] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.851 [2024-10-07 05:35:35.636938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.851 [2024-10-07 05:35:35.636999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.851 [2024-10-07 05:35:35.637011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.851 [2024-10-07 05:35:35.637036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.851 05:35:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.109 05:35:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.109 "name": "Existed_Raid", 00:16:32.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.109 "strip_size_kb": 64, 00:16:32.109 "state": "configuring", 00:16:32.109 "raid_level": "concat", 00:16:32.109 "superblock": false, 00:16:32.109 "num_base_bdevs": 3, 00:16:32.109 "num_base_bdevs_discovered": 1, 00:16:32.109 "num_base_bdevs_operational": 3, 00:16:32.109 "base_bdevs_list": [ 00:16:32.109 { 00:16:32.109 "name": "BaseBdev1", 00:16:32.109 "uuid": "f7510285-a5e0-49b9-a9ee-7810f03fed07", 00:16:32.109 "is_configured": true, 00:16:32.109 "data_offset": 0, 00:16:32.109 "data_size": 65536 00:16:32.109 }, 00:16:32.109 { 00:16:32.109 "name": "BaseBdev2", 00:16:32.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.109 "is_configured": false, 00:16:32.109 "data_offset": 0, 00:16:32.109 "data_size": 0 00:16:32.109 }, 00:16:32.109 { 00:16:32.109 "name": "BaseBdev3", 00:16:32.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.109 "is_configured": false, 00:16:32.109 "data_offset": 0, 00:16:32.109 "data_size": 0 00:16:32.109 } 00:16:32.109 ] 00:16:32.109 }' 00:16:32.109 05:35:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.109 05:35:35 -- common/autotest_common.sh@10 -- # set +x 00:16:32.676 05:35:36 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.934 [2024-10-07 05:35:36.729187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.934 BaseBdev2 00:16:32.934 05:35:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:32.934 05:35:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:32.934 05:35:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:32.934 05:35:36 -- common/autotest_common.sh@889 -- # local i 00:16:32.934 05:35:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:32.934 05:35:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:32.934 05:35:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.192 05:35:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.450 [ 00:16:33.450 { 00:16:33.450 "name": "BaseBdev2", 00:16:33.450 "aliases": [ 00:16:33.450 "153092f6-72c9-4186-9362-c0206393eb5d" 00:16:33.450 ], 00:16:33.450 "product_name": "Malloc disk", 00:16:33.450 "block_size": 512, 00:16:33.450 "num_blocks": 65536, 00:16:33.450 "uuid": "153092f6-72c9-4186-9362-c0206393eb5d", 00:16:33.450 "assigned_rate_limits": { 00:16:33.450 "rw_ios_per_sec": 0, 00:16:33.450 "rw_mbytes_per_sec": 0, 00:16:33.450 "r_mbytes_per_sec": 0, 00:16:33.450 "w_mbytes_per_sec": 0 00:16:33.450 }, 00:16:33.450 "claimed": true, 00:16:33.450 "claim_type": "exclusive_write", 00:16:33.450 "zoned": false, 00:16:33.450 "supported_io_types": { 00:16:33.450 "read": true, 00:16:33.450 "write": true, 00:16:33.450 "unmap": true, 00:16:33.450 "write_zeroes": true, 00:16:33.450 "flush": true, 00:16:33.450 "reset": true, 00:16:33.450 "compare": false, 00:16:33.450 "compare_and_write": false, 00:16:33.450 "abort": true, 00:16:33.450 "nvme_admin": false, 00:16:33.450 "nvme_io": false 00:16:33.450 }, 00:16:33.450 "memory_domains": [ 00:16:33.450 { 00:16:33.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.450 "dma_device_type": 2 00:16:33.450 } 00:16:33.450 ], 00:16:33.450 "driver_specific": {} 00:16:33.450 } 00:16:33.450 ] 00:16:33.450 05:35:37 -- common/autotest_common.sh@895 -- # return 0 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
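verify_raid_bdev_state, expanded in the trace above, reduces to a single RPC plus a jq filter: fetch every raid bdev and compare the selected record's fields (state, raid_level, strip_size_kb, base bdev counts) against the expected values. A sketch using just the two commands shown in this log:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # prints "configuring" while only some base bdevs are claimed, "online" once all three are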
00:16:33.450 05:35:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.450 "name": "Existed_Raid", 00:16:33.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.450 "strip_size_kb": 64, 00:16:33.450 "state": "configuring", 00:16:33.450 "raid_level": "concat", 00:16:33.450 "superblock": false, 00:16:33.451 "num_base_bdevs": 3, 00:16:33.451 "num_base_bdevs_discovered": 2, 00:16:33.451 "num_base_bdevs_operational": 3, 00:16:33.451 "base_bdevs_list": [ 00:16:33.451 { 00:16:33.451 "name": "BaseBdev1", 00:16:33.451 "uuid": "f7510285-a5e0-49b9-a9ee-7810f03fed07", 00:16:33.451 "is_configured": true, 00:16:33.451 "data_offset": 0, 00:16:33.451 "data_size": 65536 00:16:33.451 }, 00:16:33.451 { 00:16:33.451 "name": "BaseBdev2", 00:16:33.451 "uuid": "153092f6-72c9-4186-9362-c0206393eb5d", 00:16:33.451 "is_configured": true, 00:16:33.451 "data_offset": 0, 00:16:33.451 "data_size": 65536 00:16:33.451 }, 00:16:33.451 { 00:16:33.451 "name": "BaseBdev3", 00:16:33.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.451 "is_configured": false, 00:16:33.451 "data_offset": 0, 00:16:33.451 "data_size": 0 00:16:33.451 } 00:16:33.451 ] 00:16:33.451 }' 00:16:33.451 05:35:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.451 05:35:37 -- common/autotest_common.sh@10 -- # set +x 00:16:34.017 05:35:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:34.585 [2024-10-07 05:35:38.259611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:34.585 [2024-10-07 05:35:38.259681] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:34.585 [2024-10-07 05:35:38.259692] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:34.585 [2024-10-07 05:35:38.259815] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:34.585 [2024-10-07 05:35:38.260213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:34.585 [2024-10-07 05:35:38.260239] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:34.585 [2024-10-07 05:35:38.260515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.586 BaseBdev3 00:16:34.586 05:35:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:34.586 05:35:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:34.586 05:35:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:34.586 05:35:38 -- common/autotest_common.sh@889 -- # local i 00:16:34.586 05:35:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:34.586 05:35:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:34.586 05:35:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.586 05:35:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:34.843 [ 00:16:34.843 { 00:16:34.843 "name": "BaseBdev3", 00:16:34.843 "aliases": [ 00:16:34.843 "c2df743d-b22b-4cbe-89d3-b2b17ef5290b" 00:16:34.843 ], 00:16:34.843 "product_name": "Malloc disk", 00:16:34.843 "block_size": 512, 00:16:34.843 "num_blocks": 65536, 00:16:34.843 "uuid": "c2df743d-b22b-4cbe-89d3-b2b17ef5290b", 00:16:34.843 "assigned_rate_limits": { 00:16:34.843 
"rw_ios_per_sec": 0, 00:16:34.843 "rw_mbytes_per_sec": 0, 00:16:34.843 "r_mbytes_per_sec": 0, 00:16:34.843 "w_mbytes_per_sec": 0 00:16:34.843 }, 00:16:34.843 "claimed": true, 00:16:34.843 "claim_type": "exclusive_write", 00:16:34.843 "zoned": false, 00:16:34.843 "supported_io_types": { 00:16:34.843 "read": true, 00:16:34.843 "write": true, 00:16:34.843 "unmap": true, 00:16:34.843 "write_zeroes": true, 00:16:34.843 "flush": true, 00:16:34.843 "reset": true, 00:16:34.843 "compare": false, 00:16:34.843 "compare_and_write": false, 00:16:34.843 "abort": true, 00:16:34.843 "nvme_admin": false, 00:16:34.843 "nvme_io": false 00:16:34.843 }, 00:16:34.843 "memory_domains": [ 00:16:34.843 { 00:16:34.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.843 "dma_device_type": 2 00:16:34.843 } 00:16:34.843 ], 00:16:34.843 "driver_specific": {} 00:16:34.843 } 00:16:34.843 ] 00:16:34.843 05:35:38 -- common/autotest_common.sh@895 -- # return 0 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.843 05:35:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.102 05:35:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.102 "name": "Existed_Raid", 00:16:35.102 "uuid": "194e6495-9de1-49d5-89f2-7cce7ab648f1", 00:16:35.102 "strip_size_kb": 64, 00:16:35.102 "state": "online", 00:16:35.102 "raid_level": "concat", 00:16:35.102 "superblock": false, 00:16:35.102 "num_base_bdevs": 3, 00:16:35.102 "num_base_bdevs_discovered": 3, 00:16:35.102 "num_base_bdevs_operational": 3, 00:16:35.102 "base_bdevs_list": [ 00:16:35.102 { 00:16:35.102 "name": "BaseBdev1", 00:16:35.102 "uuid": "f7510285-a5e0-49b9-a9ee-7810f03fed07", 00:16:35.102 "is_configured": true, 00:16:35.102 "data_offset": 0, 00:16:35.102 "data_size": 65536 00:16:35.102 }, 00:16:35.102 { 00:16:35.102 "name": "BaseBdev2", 00:16:35.102 "uuid": "153092f6-72c9-4186-9362-c0206393eb5d", 00:16:35.102 "is_configured": true, 00:16:35.102 "data_offset": 0, 00:16:35.102 "data_size": 65536 00:16:35.102 }, 00:16:35.102 { 00:16:35.102 "name": "BaseBdev3", 00:16:35.102 "uuid": "c2df743d-b22b-4cbe-89d3-b2b17ef5290b", 00:16:35.102 "is_configured": true, 00:16:35.102 "data_offset": 0, 00:16:35.102 "data_size": 65536 00:16:35.102 } 00:16:35.102 ] 00:16:35.102 }' 00:16:35.102 05:35:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.102 05:35:38 -- common/autotest_common.sh@10 -- # set +x 00:16:35.669 05:35:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:35.927 [2024-10-07 05:35:39.724085] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.927 [2024-10-07 05:35:39.724133] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.927 [2024-10-07 05:35:39.724204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.927 05:35:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.928 05:35:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.186 05:35:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.186 "name": "Existed_Raid", 00:16:36.186 "uuid": "194e6495-9de1-49d5-89f2-7cce7ab648f1", 00:16:36.186 "strip_size_kb": 64, 00:16:36.186 "state": "offline", 00:16:36.186 "raid_level": "concat", 00:16:36.186 "superblock": false, 00:16:36.186 "num_base_bdevs": 3, 00:16:36.186 "num_base_bdevs_discovered": 2, 00:16:36.186 "num_base_bdevs_operational": 2, 00:16:36.186 "base_bdevs_list": [ 00:16:36.186 { 00:16:36.186 "name": null, 00:16:36.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.186 "is_configured": false, 00:16:36.186 "data_offset": 0, 00:16:36.186 "data_size": 65536 00:16:36.186 }, 00:16:36.186 { 00:16:36.186 "name": "BaseBdev2", 00:16:36.186 "uuid": "153092f6-72c9-4186-9362-c0206393eb5d", 00:16:36.186 "is_configured": true, 00:16:36.186 "data_offset": 0, 00:16:36.186 "data_size": 65536 00:16:36.186 }, 00:16:36.186 { 00:16:36.186 "name": "BaseBdev3", 00:16:36.186 "uuid": "c2df743d-b22b-4cbe-89d3-b2b17ef5290b", 00:16:36.186 "is_configured": true, 00:16:36.186 "data_offset": 0, 00:16:36.186 "data_size": 65536 00:16:36.186 } 00:16:36.186 ] 00:16:36.186 }' 00:16:36.186 05:35:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.186 05:35:40 -- common/autotest_common.sh@10 -- # set +x 00:16:36.752 05:35:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:36.752 05:35:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.752 05:35:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.752 05:35:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:37.011 05:35:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:37.011 05:35:40 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.011 05:35:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:37.269 [2024-10-07 05:35:41.168383] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.527 05:35:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:37.527 05:35:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:37.527 05:35:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.527 05:35:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:37.786 05:35:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:37.786 05:35:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.786 05:35:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:38.045 [2024-10-07 05:35:41.800032] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:38.045 [2024-10-07 05:35:41.800097] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:38.045 05:35:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:38.045 05:35:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:38.045 05:35:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:38.045 05:35:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.304 05:35:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:38.304 05:35:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:38.304 05:35:42 -- bdev/bdev_raid.sh@287 -- # killprocess 143576 00:16:38.304 05:35:42 -- common/autotest_common.sh@926 -- # '[' -z 143576 ']' 00:16:38.304 05:35:42 -- common/autotest_common.sh@930 -- # kill -0 143576 00:16:38.304 05:35:42 -- common/autotest_common.sh@931 -- # uname 00:16:38.304 05:35:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:38.304 05:35:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143576 00:16:38.304 05:35:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:38.304 05:35:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:38.304 05:35:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143576' 00:16:38.304 killing process with pid 143576 00:16:38.304 05:35:42 -- common/autotest_common.sh@945 -- # kill 143576 00:16:38.304 05:35:42 -- common/autotest_common.sh@950 -- # wait 143576 00:16:38.304 [2024-10-07 05:35:42.194625] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.304 [2024-10-07 05:35:42.194775] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.238 ************************************ 00:16:39.238 END TEST raid_state_function_test 00:16:39.238 ************************************ 00:16:39.238 05:35:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:39.238 00:16:39.238 real 0m12.159s 00:16:39.238 user 0m21.333s 00:16:39.238 sys 0m1.566s 00:16:39.238 05:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.238 05:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:39.498 05:35:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
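The teardown traced just above is also the property this test asserts: concat, like raid0, reports no redundancy (has_redundancy returns 1), so deleting a single base bdev is enough to drop Existed_Raid from "online" to "offline"; the remaining deletions merely release the leftover claims. The same can be observed by hand with the commands from the trace (a sketch):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # now reports "offline"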
00:16:39.498 05:35:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.498 05:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.498 ************************************ 00:16:39.498 START TEST raid_state_function_test_sb 00:16:39.498 ************************************ 00:16:39.498 05:35:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=144400 00:16:39.498 Process raid pid: 144400 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 144400' 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 144400 /var/tmp/spdk-raid.sock 00:16:39.498 05:35:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:39.498 05:35:43 -- common/autotest_common.sh@819 -- # '[' -z 144400 ']' 00:16:39.498 05:35:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:39.498 05:35:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:39.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:39.498 05:35:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:39.498 05:35:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:39.498 05:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:39.498 [2024-10-07 05:35:43.335801] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
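The _sb variant starting here differs from the previous run only in its creation argument: superblock=true becomes a -s flag on bdev_raid_create, so the array metadata is persisted in a superblock at the start of each base bdev rather than kept only in memory; that is also why the superblock runs earlier in this log report data_offset 2048 instead of 0. The creation call, as it appears further down in this trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
    -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid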
00:16:39.498 [2024-10-07 05:35:43.335960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.757 [2024-10-07 05:35:43.490589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.757 [2024-10-07 05:35:43.682981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.016 [2024-10-07 05:35:43.874845] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.582 05:35:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:40.582 05:35:44 -- common/autotest_common.sh@852 -- # return 0 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:40.582 [2024-10-07 05:35:44.518319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.582 [2024-10-07 05:35:44.518401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.582 [2024-10-07 05:35:44.518414] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.582 [2024-10-07 05:35:44.518433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.582 [2024-10-07 05:35:44.518441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:40.582 [2024-10-07 05:35:44.518483] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.582 05:35:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.840 05:35:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.840 "name": "Existed_Raid", 00:16:40.840 "uuid": "76cadd4d-ccfc-46df-9928-cd3760a5add2", 00:16:40.840 "strip_size_kb": 64, 00:16:40.840 "state": "configuring", 00:16:40.840 "raid_level": "concat", 00:16:40.840 "superblock": true, 00:16:40.840 "num_base_bdevs": 3, 00:16:40.840 "num_base_bdevs_discovered": 0, 00:16:40.840 "num_base_bdevs_operational": 3, 00:16:40.840 "base_bdevs_list": [ 00:16:40.840 { 00:16:40.840 "name": "BaseBdev1", 00:16:40.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.840 "is_configured": false, 00:16:40.840 "data_offset": 0, 00:16:40.840 "data_size": 0 00:16:40.840 }, 00:16:40.840 { 00:16:40.840 "name": "BaseBdev2", 00:16:40.840 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:40.840 "is_configured": false, 00:16:40.840 "data_offset": 0, 00:16:40.840 "data_size": 0 00:16:40.840 }, 00:16:40.840 { 00:16:40.840 "name": "BaseBdev3", 00:16:40.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.840 "is_configured": false, 00:16:40.840 "data_offset": 0, 00:16:40.840 "data_size": 0 00:16:40.840 } 00:16:40.840 ] 00:16:40.840 }' 00:16:40.840 05:35:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.840 05:35:44 -- common/autotest_common.sh@10 -- # set +x 00:16:41.406 05:35:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:41.664 [2024-10-07 05:35:45.558360] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.664 [2024-10-07 05:35:45.558389] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:41.664 05:35:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.923 [2024-10-07 05:35:45.750472] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.923 [2024-10-07 05:35:45.750541] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.923 [2024-10-07 05:35:45.750553] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.923 [2024-10-07 05:35:45.750579] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.923 [2024-10-07 05:35:45.750587] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.923 [2024-10-07 05:35:45.750609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.923 05:35:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.182 [2024-10-07 05:35:46.052501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.182 BaseBdev1 00:16:42.182 05:35:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:42.182 05:35:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:42.182 05:35:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:42.182 05:35:46 -- common/autotest_common.sh@889 -- # local i 00:16:42.182 05:35:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:42.182 05:35:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:42.182 05:35:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.440 05:35:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.698 [ 00:16:42.698 { 00:16:42.698 "name": "BaseBdev1", 00:16:42.698 "aliases": [ 00:16:42.698 "295af45e-f9dc-42ef-a9d3-99228108c991" 00:16:42.698 ], 00:16:42.698 "product_name": "Malloc disk", 00:16:42.698 "block_size": 512, 00:16:42.698 "num_blocks": 65536, 00:16:42.698 "uuid": "295af45e-f9dc-42ef-a9d3-99228108c991", 00:16:42.698 "assigned_rate_limits": { 00:16:42.698 "rw_ios_per_sec": 0, 00:16:42.698 "rw_mbytes_per_sec": 0, 00:16:42.698 "r_mbytes_per_sec": 0, 00:16:42.698 
"w_mbytes_per_sec": 0 00:16:42.698 }, 00:16:42.698 "claimed": true, 00:16:42.698 "claim_type": "exclusive_write", 00:16:42.698 "zoned": false, 00:16:42.698 "supported_io_types": { 00:16:42.698 "read": true, 00:16:42.698 "write": true, 00:16:42.698 "unmap": true, 00:16:42.698 "write_zeroes": true, 00:16:42.698 "flush": true, 00:16:42.698 "reset": true, 00:16:42.698 "compare": false, 00:16:42.698 "compare_and_write": false, 00:16:42.698 "abort": true, 00:16:42.698 "nvme_admin": false, 00:16:42.698 "nvme_io": false 00:16:42.698 }, 00:16:42.698 "memory_domains": [ 00:16:42.698 { 00:16:42.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.698 "dma_device_type": 2 00:16:42.698 } 00:16:42.698 ], 00:16:42.698 "driver_specific": {} 00:16:42.698 } 00:16:42.698 ] 00:16:42.698 05:35:46 -- common/autotest_common.sh@895 -- # return 0 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.698 05:35:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.956 05:35:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.956 "name": "Existed_Raid", 00:16:42.956 "uuid": "919709fd-c7d1-4994-876e-cc40525d60b8", 00:16:42.956 "strip_size_kb": 64, 00:16:42.956 "state": "configuring", 00:16:42.956 "raid_level": "concat", 00:16:42.956 "superblock": true, 00:16:42.956 "num_base_bdevs": 3, 00:16:42.956 "num_base_bdevs_discovered": 1, 00:16:42.956 "num_base_bdevs_operational": 3, 00:16:42.956 "base_bdevs_list": [ 00:16:42.956 { 00:16:42.956 "name": "BaseBdev1", 00:16:42.956 "uuid": "295af45e-f9dc-42ef-a9d3-99228108c991", 00:16:42.956 "is_configured": true, 00:16:42.956 "data_offset": 2048, 00:16:42.956 "data_size": 63488 00:16:42.956 }, 00:16:42.956 { 00:16:42.956 "name": "BaseBdev2", 00:16:42.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.956 "is_configured": false, 00:16:42.956 "data_offset": 0, 00:16:42.956 "data_size": 0 00:16:42.956 }, 00:16:42.956 { 00:16:42.956 "name": "BaseBdev3", 00:16:42.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.956 "is_configured": false, 00:16:42.956 "data_offset": 0, 00:16:42.956 "data_size": 0 00:16:42.956 } 00:16:42.956 ] 00:16:42.956 }' 00:16:42.956 05:35:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.956 05:35:46 -- common/autotest_common.sh@10 -- # set +x 00:16:43.522 05:35:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:43.810 [2024-10-07 05:35:47.600865] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.810 [2024-10-07 05:35:47.600939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:43.810 05:35:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:43.810 05:35:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:44.070 05:35:47 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.329 BaseBdev1 00:16:44.329 05:35:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:44.329 05:35:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:44.329 05:35:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.329 05:35:48 -- common/autotest_common.sh@889 -- # local i 00:16:44.329 05:35:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.329 05:35:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.329 05:35:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.586 05:35:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.868 [ 00:16:44.868 { 00:16:44.868 "name": "BaseBdev1", 00:16:44.868 "aliases": [ 00:16:44.868 "93efa219-bc79-4e9f-a461-537a93f52484" 00:16:44.868 ], 00:16:44.868 "product_name": "Malloc disk", 00:16:44.868 "block_size": 512, 00:16:44.868 "num_blocks": 65536, 00:16:44.868 "uuid": "93efa219-bc79-4e9f-a461-537a93f52484", 00:16:44.868 "assigned_rate_limits": { 00:16:44.868 "rw_ios_per_sec": 0, 00:16:44.868 "rw_mbytes_per_sec": 0, 00:16:44.868 "r_mbytes_per_sec": 0, 00:16:44.868 "w_mbytes_per_sec": 0 00:16:44.868 }, 00:16:44.868 "claimed": false, 00:16:44.868 "zoned": false, 00:16:44.868 "supported_io_types": { 00:16:44.868 "read": true, 00:16:44.868 "write": true, 00:16:44.868 "unmap": true, 00:16:44.868 "write_zeroes": true, 00:16:44.868 "flush": true, 00:16:44.868 "reset": true, 00:16:44.868 "compare": false, 00:16:44.868 "compare_and_write": false, 00:16:44.868 "abort": true, 00:16:44.868 "nvme_admin": false, 00:16:44.868 "nvme_io": false 00:16:44.868 }, 00:16:44.868 "memory_domains": [ 00:16:44.868 { 00:16:44.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.868 "dma_device_type": 2 00:16:44.868 } 00:16:44.868 ], 00:16:44.868 "driver_specific": {} 00:16:44.868 } 00:16:44.868 ] 00:16:44.868 05:35:48 -- common/autotest_common.sh@895 -- # return 0 00:16:44.868 05:35:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:45.127 [2024-10-07 05:35:49.054358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.127 [2024-10-07 05:35:49.056459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:45.127 [2024-10-07 05:35:49.056523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:45.127 [2024-10-07 05:35:49.056535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:45.127 [2024-10-07 05:35:49.056562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:45.127 
05:35:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.127 05:35:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.384 05:35:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.384 "name": "Existed_Raid", 00:16:45.384 "uuid": "23fa62d7-d9de-4f28-9c3b-777a8bfdcc7b", 00:16:45.384 "strip_size_kb": 64, 00:16:45.384 "state": "configuring", 00:16:45.384 "raid_level": "concat", 00:16:45.384 "superblock": true, 00:16:45.384 "num_base_bdevs": 3, 00:16:45.384 "num_base_bdevs_discovered": 1, 00:16:45.384 "num_base_bdevs_operational": 3, 00:16:45.384 "base_bdevs_list": [ 00:16:45.384 { 00:16:45.384 "name": "BaseBdev1", 00:16:45.384 "uuid": "93efa219-bc79-4e9f-a461-537a93f52484", 00:16:45.384 "is_configured": true, 00:16:45.384 "data_offset": 2048, 00:16:45.384 "data_size": 63488 00:16:45.384 }, 00:16:45.384 { 00:16:45.384 "name": "BaseBdev2", 00:16:45.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.384 "is_configured": false, 00:16:45.384 "data_offset": 0, 00:16:45.384 "data_size": 0 00:16:45.384 }, 00:16:45.384 { 00:16:45.384 "name": "BaseBdev3", 00:16:45.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.384 "is_configured": false, 00:16:45.384 "data_offset": 0, 00:16:45.384 "data_size": 0 00:16:45.384 } 00:16:45.384 ] 00:16:45.384 }' 00:16:45.384 05:35:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.384 05:35:49 -- common/autotest_common.sh@10 -- # set +x 00:16:46.317 05:35:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:46.574 [2024-10-07 05:35:50.324997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.574 BaseBdev2 00:16:46.574 05:35:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:46.574 05:35:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:46.574 05:35:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:46.574 05:35:50 -- common/autotest_common.sh@889 -- # local i 00:16:46.574 05:35:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:46.574 05:35:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:46.574 05:35:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.832 05:35:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:47.089 [ 00:16:47.089 { 00:16:47.089 "name": "BaseBdev2", 00:16:47.089 "aliases": [ 00:16:47.089 
"7bfc45a8-316b-4d94-ab52-c446beae73fc" 00:16:47.089 ], 00:16:47.089 "product_name": "Malloc disk", 00:16:47.089 "block_size": 512, 00:16:47.089 "num_blocks": 65536, 00:16:47.089 "uuid": "7bfc45a8-316b-4d94-ab52-c446beae73fc", 00:16:47.089 "assigned_rate_limits": { 00:16:47.089 "rw_ios_per_sec": 0, 00:16:47.089 "rw_mbytes_per_sec": 0, 00:16:47.089 "r_mbytes_per_sec": 0, 00:16:47.089 "w_mbytes_per_sec": 0 00:16:47.089 }, 00:16:47.089 "claimed": true, 00:16:47.089 "claim_type": "exclusive_write", 00:16:47.089 "zoned": false, 00:16:47.089 "supported_io_types": { 00:16:47.089 "read": true, 00:16:47.089 "write": true, 00:16:47.089 "unmap": true, 00:16:47.089 "write_zeroes": true, 00:16:47.089 "flush": true, 00:16:47.089 "reset": true, 00:16:47.089 "compare": false, 00:16:47.089 "compare_and_write": false, 00:16:47.089 "abort": true, 00:16:47.089 "nvme_admin": false, 00:16:47.089 "nvme_io": false 00:16:47.089 }, 00:16:47.089 "memory_domains": [ 00:16:47.089 { 00:16:47.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:47.089 "dma_device_type": 2 00:16:47.089 } 00:16:47.089 ], 00:16:47.089 "driver_specific": {} 00:16:47.089 } 00:16:47.089 ] 00:16:47.089 05:35:50 -- common/autotest_common.sh@895 -- # return 0 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.089 05:35:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.090 05:35:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.348 05:35:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.348 "name": "Existed_Raid", 00:16:47.348 "uuid": "23fa62d7-d9de-4f28-9c3b-777a8bfdcc7b", 00:16:47.348 "strip_size_kb": 64, 00:16:47.348 "state": "configuring", 00:16:47.348 "raid_level": "concat", 00:16:47.348 "superblock": true, 00:16:47.348 "num_base_bdevs": 3, 00:16:47.348 "num_base_bdevs_discovered": 2, 00:16:47.348 "num_base_bdevs_operational": 3, 00:16:47.348 "base_bdevs_list": [ 00:16:47.348 { 00:16:47.348 "name": "BaseBdev1", 00:16:47.348 "uuid": "93efa219-bc79-4e9f-a461-537a93f52484", 00:16:47.348 "is_configured": true, 00:16:47.348 "data_offset": 2048, 00:16:47.348 "data_size": 63488 00:16:47.348 }, 00:16:47.348 { 00:16:47.348 "name": "BaseBdev2", 00:16:47.348 "uuid": "7bfc45a8-316b-4d94-ab52-c446beae73fc", 00:16:47.348 "is_configured": true, 00:16:47.348 "data_offset": 2048, 00:16:47.348 "data_size": 63488 00:16:47.348 }, 00:16:47.348 { 00:16:47.348 "name": "BaseBdev3", 00:16:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.348 "is_configured": false, 00:16:47.348 "data_offset": 0, 00:16:47.348 "data_size": 0 
00:16:47.348 } 00:16:47.348 ] 00:16:47.348 }' 00:16:47.348 05:35:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.348 05:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:47.915 05:35:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:48.173 [2024-10-07 05:35:52.085386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.173 [2024-10-07 05:35:52.085606] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:48.173 [2024-10-07 05:35:52.085621] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:48.173 [2024-10-07 05:35:52.085774] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:48.173 [2024-10-07 05:35:52.086098] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:48.173 [2024-10-07 05:35:52.086128] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:48.173 [2024-10-07 05:35:52.086260] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.173 BaseBdev3 00:16:48.173 05:35:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:48.173 05:35:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:48.173 05:35:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:48.173 05:35:52 -- common/autotest_common.sh@889 -- # local i 00:16:48.173 05:35:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:48.173 05:35:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:48.173 05:35:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.431 05:35:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:48.689 [ 00:16:48.689 { 00:16:48.689 "name": "BaseBdev3", 00:16:48.689 "aliases": [ 00:16:48.689 "6fdf15ce-3c23-4fa9-8784-7927470f728d" 00:16:48.689 ], 00:16:48.689 "product_name": "Malloc disk", 00:16:48.689 "block_size": 512, 00:16:48.689 "num_blocks": 65536, 00:16:48.689 "uuid": "6fdf15ce-3c23-4fa9-8784-7927470f728d", 00:16:48.689 "assigned_rate_limits": { 00:16:48.689 "rw_ios_per_sec": 0, 00:16:48.689 "rw_mbytes_per_sec": 0, 00:16:48.689 "r_mbytes_per_sec": 0, 00:16:48.689 "w_mbytes_per_sec": 0 00:16:48.689 }, 00:16:48.689 "claimed": true, 00:16:48.689 "claim_type": "exclusive_write", 00:16:48.689 "zoned": false, 00:16:48.689 "supported_io_types": { 00:16:48.689 "read": true, 00:16:48.689 "write": true, 00:16:48.689 "unmap": true, 00:16:48.689 "write_zeroes": true, 00:16:48.690 "flush": true, 00:16:48.690 "reset": true, 00:16:48.690 "compare": false, 00:16:48.690 "compare_and_write": false, 00:16:48.690 "abort": true, 00:16:48.690 "nvme_admin": false, 00:16:48.690 "nvme_io": false 00:16:48.690 }, 00:16:48.690 "memory_domains": [ 00:16:48.690 { 00:16:48.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.690 "dma_device_type": 2 00:16:48.690 } 00:16:48.690 ], 00:16:48.690 "driver_specific": {} 00:16:48.690 } 00:16:48.690 ] 00:16:48.690 05:35:52 -- common/autotest_common.sh@895 -- # return 0 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:48.690 05:35:52 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.690 05:35:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.948 05:35:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.948 "name": "Existed_Raid", 00:16:48.948 "uuid": "23fa62d7-d9de-4f28-9c3b-777a8bfdcc7b", 00:16:48.948 "strip_size_kb": 64, 00:16:48.948 "state": "online", 00:16:48.948 "raid_level": "concat", 00:16:48.948 "superblock": true, 00:16:48.948 "num_base_bdevs": 3, 00:16:48.948 "num_base_bdevs_discovered": 3, 00:16:48.948 "num_base_bdevs_operational": 3, 00:16:48.948 "base_bdevs_list": [ 00:16:48.948 { 00:16:48.948 "name": "BaseBdev1", 00:16:48.948 "uuid": "93efa219-bc79-4e9f-a461-537a93f52484", 00:16:48.948 "is_configured": true, 00:16:48.948 "data_offset": 2048, 00:16:48.948 "data_size": 63488 00:16:48.948 }, 00:16:48.948 { 00:16:48.948 "name": "BaseBdev2", 00:16:48.948 "uuid": "7bfc45a8-316b-4d94-ab52-c446beae73fc", 00:16:48.948 "is_configured": true, 00:16:48.948 "data_offset": 2048, 00:16:48.948 "data_size": 63488 00:16:48.948 }, 00:16:48.948 { 00:16:48.948 "name": "BaseBdev3", 00:16:48.948 "uuid": "6fdf15ce-3c23-4fa9-8784-7927470f728d", 00:16:48.948 "is_configured": true, 00:16:48.948 "data_offset": 2048, 00:16:48.948 "data_size": 63488 00:16:48.948 } 00:16:48.948 ] 00:16:48.948 }' 00:16:48.948 05:35:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.948 05:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:49.515 05:35:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:49.773 [2024-10-07 05:35:53.749929] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.773 [2024-10-07 05:35:53.749973] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.773 [2024-10-07 05:35:53.750035] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.032 05:35:53 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.032 05:35:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.290 05:35:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.291 "name": "Existed_Raid", 00:16:50.291 "uuid": "23fa62d7-d9de-4f28-9c3b-777a8bfdcc7b", 00:16:50.291 "strip_size_kb": 64, 00:16:50.291 "state": "offline", 00:16:50.291 "raid_level": "concat", 00:16:50.291 "superblock": true, 00:16:50.291 "num_base_bdevs": 3, 00:16:50.291 "num_base_bdevs_discovered": 2, 00:16:50.291 "num_base_bdevs_operational": 2, 00:16:50.291 "base_bdevs_list": [ 00:16:50.291 { 00:16:50.291 "name": null, 00:16:50.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.291 "is_configured": false, 00:16:50.291 "data_offset": 2048, 00:16:50.291 "data_size": 63488 00:16:50.291 }, 00:16:50.291 { 00:16:50.291 "name": "BaseBdev2", 00:16:50.291 "uuid": "7bfc45a8-316b-4d94-ab52-c446beae73fc", 00:16:50.291 "is_configured": true, 00:16:50.291 "data_offset": 2048, 00:16:50.291 "data_size": 63488 00:16:50.291 }, 00:16:50.291 { 00:16:50.291 "name": "BaseBdev3", 00:16:50.291 "uuid": "6fdf15ce-3c23-4fa9-8784-7927470f728d", 00:16:50.291 "is_configured": true, 00:16:50.291 "data_offset": 2048, 00:16:50.291 "data_size": 63488 00:16:50.291 } 00:16:50.291 ] 00:16:50.291 }' 00:16:50.291 05:35:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.291 05:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:50.857 05:35:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:50.857 05:35:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:50.857 05:35:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.857 05:35:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:51.115 05:35:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:51.115 05:35:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.115 05:35:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:51.373 [2024-10-07 05:35:55.303864] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:51.631 05:35:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:51.631 05:35:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:51.631 05:35:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.631 05:35:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:51.889 05:35:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:51.889 05:35:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:51.889 05:35:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:51.889 [2024-10-07 05:35:55.831067] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
00:16:51.889 [2024-10-07 05:35:55.831139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:52.148 05:35:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:52.148 05:35:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:52.148 05:35:55 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.148 05:35:55 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:52.407 05:35:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:52.407 05:35:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:52.407 05:35:56 -- bdev/bdev_raid.sh@287 -- # killprocess 144400 00:16:52.407 05:35:56 -- common/autotest_common.sh@926 -- # '[' -z 144400 ']' 00:16:52.407 05:35:56 -- common/autotest_common.sh@930 -- # kill -0 144400 00:16:52.407 05:35:56 -- common/autotest_common.sh@931 -- # uname 00:16:52.407 05:35:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.407 05:35:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144400 00:16:52.407 05:35:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:52.407 05:35:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:52.407 killing process with pid 144400 00:16:52.407 05:35:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144400' 00:16:52.407 05:35:56 -- common/autotest_common.sh@945 -- # kill 144400 00:16:52.407 [2024-10-07 05:35:56.227569] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.407 05:35:56 -- common/autotest_common.sh@950 -- # wait 144400 00:16:52.407 [2024-10-07 05:35:56.227701] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.345 05:35:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:53.345 00:16:53.345 real 0m14.001s 00:16:53.345 user 0m24.643s 00:16:53.345 sys 0m1.782s 00:16:53.345 05:35:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:53.345 05:35:57 -- common/autotest_common.sh@10 -- # set +x 00:16:53.345 ************************************ 00:16:53.345 END TEST raid_state_function_test_sb 00:16:53.345 ************************************ 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:53.604 05:35:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:53.604 05:35:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:53.604 05:35:57 -- common/autotest_common.sh@10 -- # set +x 00:16:53.604 ************************************ 00:16:53.604 START TEST raid_superblock_test 00:16:53.604 ************************************ 00:16:53.604 05:35:57 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@357 -- # raid_pid=145755 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@358 -- # waitforlisten 145755 /var/tmp/spdk-raid.sock 00:16:53.604 05:35:57 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:53.604 05:35:57 -- common/autotest_common.sh@819 -- # '[' -z 145755 ']' 00:16:53.604 05:35:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.604 05:35:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.604 05:35:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.604 05:35:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.604 05:35:57 -- common/autotest_common.sh@10 -- # set +x 00:16:53.604 [2024-10-07 05:35:57.407942] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:53.604 [2024-10-07 05:35:57.408126] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145755 ] 00:16:53.604 [2024-10-07 05:35:57.569374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.863 [2024-10-07 05:35:57.767173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.122 [2024-10-07 05:35:57.958546] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.381 05:35:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.381 05:35:58 -- common/autotest_common.sh@852 -- # return 0 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.381 05:35:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:54.639 malloc1 00:16:54.639 05:35:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.898 [2024-10-07 05:35:58.688064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.898 [2024-10-07 05:35:58.688172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:54.898 [2024-10-07 05:35:58.688210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:54.898 [2024-10-07 05:35:58.688266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.898 [2024-10-07 05:35:58.690644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.898 [2024-10-07 05:35:58.690692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.898 pt1 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.898 05:35:58 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:55.159 malloc2 00:16:55.159 05:35:58 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.159 [2024-10-07 05:35:59.089032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.159 [2024-10-07 05:35:59.089104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.159 [2024-10-07 05:35:59.089149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:55.159 [2024-10-07 05:35:59.089206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.159 [2024-10-07 05:35:59.091474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.159 [2024-10-07 05:35:59.091520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.159 pt2 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.159 05:35:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:55.418 malloc3 00:16:55.418 05:35:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:55.678 [2024-10-07 05:35:59.478459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:55.678 [2024-10-07 05:35:59.478550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:55.678 [2024-10-07 05:35:59.478603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:55.678 [2024-10-07 05:35:59.478682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.678 [2024-10-07 05:35:59.480980] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.678 [2024-10-07 05:35:59.481038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:55.678 pt3 00:16:55.678 05:35:59 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:55.678 05:35:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:55.678 05:35:59 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:55.937 [2024-10-07 05:35:59.730540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.937 [2024-10-07 05:35:59.732693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.937 [2024-10-07 05:35:59.732782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:55.937 [2024-10-07 05:35:59.733022] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:55.937 [2024-10-07 05:35:59.733046] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:55.937 [2024-10-07 05:35:59.733189] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:55.937 [2024-10-07 05:35:59.733598] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:55.937 [2024-10-07 05:35:59.733620] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:55.937 [2024-10-07 05:35:59.733794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.937 05:35:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.196 05:36:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.196 "name": "raid_bdev1", 00:16:56.196 "uuid": "3590c0be-f230-46f5-ba95-9c037937123c", 00:16:56.196 "strip_size_kb": 64, 00:16:56.196 "state": "online", 00:16:56.196 "raid_level": "concat", 00:16:56.196 "superblock": true, 00:16:56.196 "num_base_bdevs": 3, 00:16:56.196 "num_base_bdevs_discovered": 3, 00:16:56.196 "num_base_bdevs_operational": 3, 00:16:56.196 "base_bdevs_list": [ 00:16:56.196 { 00:16:56.196 "name": "pt1", 00:16:56.196 "uuid": 
"52f42e00-611d-5121-b0ce-6ccbc73812ca", 00:16:56.196 "is_configured": true, 00:16:56.196 "data_offset": 2048, 00:16:56.196 "data_size": 63488 00:16:56.196 }, 00:16:56.196 { 00:16:56.196 "name": "pt2", 00:16:56.196 "uuid": "3572d40e-cd52-5bb4-ac2f-3ca062f45fff", 00:16:56.196 "is_configured": true, 00:16:56.196 "data_offset": 2048, 00:16:56.196 "data_size": 63488 00:16:56.196 }, 00:16:56.196 { 00:16:56.196 "name": "pt3", 00:16:56.196 "uuid": "60337127-5536-56c3-81c7-49dae275820c", 00:16:56.196 "is_configured": true, 00:16:56.196 "data_offset": 2048, 00:16:56.196 "data_size": 63488 00:16:56.196 } 00:16:56.196 ] 00:16:56.196 }' 00:16:56.196 05:36:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.196 05:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:56.764 05:36:00 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:56.764 05:36:00 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:57.023 [2024-10-07 05:36:00.786957] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.023 05:36:00 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3590c0be-f230-46f5-ba95-9c037937123c 00:16:57.023 05:36:00 -- bdev/bdev_raid.sh@380 -- # '[' -z 3590c0be-f230-46f5-ba95-9c037937123c ']' 00:16:57.023 05:36:00 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:57.283 [2024-10-07 05:36:01.102767] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.283 [2024-10-07 05:36:01.102793] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.283 [2024-10-07 05:36:01.102878] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.283 [2024-10-07 05:36:01.102965] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.283 [2024-10-07 05:36:01.102993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:57.283 05:36:01 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.283 05:36:01 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:57.541 05:36:01 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:57.541 05:36:01 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:57.542 05:36:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.542 05:36:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:57.800 05:36:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.800 05:36:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:58.059 05:36:01 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.059 05:36:01 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:58.059 05:36:02 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:58.059 05:36:02 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.318 05:36:02 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:58.318 05:36:02 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:58.318 05:36:02 -- common/autotest_common.sh@640 -- # local es=0 00:16:58.318 05:36:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:58.318 05:36:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.318 05:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.318 05:36:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.318 05:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.318 05:36:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.318 05:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.318 05:36:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.318 05:36:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:58.318 05:36:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:58.577 [2024-10-07 05:36:02.463166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.577 [2024-10-07 05:36:02.465103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.577 [2024-10-07 05:36:02.465156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:58.577 [2024-10-07 05:36:02.465218] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:58.577 [2024-10-07 05:36:02.465309] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:58.577 [2024-10-07 05:36:02.465356] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:58.577 [2024-10-07 05:36:02.465404] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.577 [2024-10-07 05:36:02.465416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:58.577 request: 00:16:58.577 { 00:16:58.577 "name": "raid_bdev1", 00:16:58.577 "raid_level": "concat", 00:16:58.577 "base_bdevs": [ 00:16:58.578 "malloc1", 00:16:58.578 "malloc2", 00:16:58.578 "malloc3" 00:16:58.578 ], 00:16:58.578 "superblock": false, 00:16:58.578 "strip_size_kb": 64, 00:16:58.578 "method": "bdev_raid_create", 00:16:58.578 "req_id": 1 00:16:58.578 } 00:16:58.578 Got JSON-RPC error response 00:16:58.578 response: 00:16:58.578 { 00:16:58.578 "code": -17, 00:16:58.578 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.578 } 00:16:58.578 05:36:02 -- common/autotest_common.sh@643 -- # es=1 00:16:58.578 05:36:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:58.578 05:36:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:58.578 05:36:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:58.578 05:36:02 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.578 05:36:02 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:58.837 05:36:02 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:58.837 05:36:02 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:58.837 05:36:02 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.096 [2024-10-07 05:36:02.855165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.096 [2024-10-07 05:36:02.855248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.096 [2024-10-07 05:36:02.855293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:59.096 [2024-10-07 05:36:02.855318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.096 [2024-10-07 05:36:02.857730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.096 [2024-10-07 05:36:02.857797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.096 [2024-10-07 05:36:02.857922] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:59.096 [2024-10-07 05:36:02.858002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.096 pt1 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.096 05:36:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.374 05:36:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.374 "name": "raid_bdev1", 00:16:59.374 "uuid": "3590c0be-f230-46f5-ba95-9c037937123c", 00:16:59.374 "strip_size_kb": 64, 00:16:59.374 "state": "configuring", 00:16:59.374 "raid_level": "concat", 00:16:59.374 "superblock": true, 00:16:59.374 "num_base_bdevs": 3, 00:16:59.374 "num_base_bdevs_discovered": 1, 00:16:59.374 "num_base_bdevs_operational": 3, 00:16:59.374 "base_bdevs_list": [ 00:16:59.374 { 00:16:59.374 "name": "pt1", 00:16:59.374 "uuid": "52f42e00-611d-5121-b0ce-6ccbc73812ca", 00:16:59.374 "is_configured": true, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": null, 00:16:59.374 "uuid": "3572d40e-cd52-5bb4-ac2f-3ca062f45fff", 00:16:59.374 "is_configured": false, 00:16:59.374 "data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 }, 00:16:59.374 { 00:16:59.374 "name": null, 00:16:59.374 "uuid": "60337127-5536-56c3-81c7-49dae275820c", 00:16:59.374 "is_configured": false, 00:16:59.374 
"data_offset": 2048, 00:16:59.374 "data_size": 63488 00:16:59.374 } 00:16:59.374 ] 00:16:59.374 }' 00:16:59.374 05:36:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.374 05:36:03 -- common/autotest_common.sh@10 -- # set +x 00:16:59.942 05:36:03 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:59.942 05:36:03 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.203 [2024-10-07 05:36:04.035867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.203 [2024-10-07 05:36:04.035966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.203 [2024-10-07 05:36:04.036028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:00.203 [2024-10-07 05:36:04.036055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.203 [2024-10-07 05:36:04.036574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.203 [2024-10-07 05:36:04.036614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.203 [2024-10-07 05:36:04.036742] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:00.203 [2024-10-07 05:36:04.036768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.203 pt2 00:17:00.203 05:36:04 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:00.481 [2024-10-07 05:36:04.223948] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.481 05:36:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.751 05:36:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.751 "name": "raid_bdev1", 00:17:00.751 "uuid": "3590c0be-f230-46f5-ba95-9c037937123c", 00:17:00.751 "strip_size_kb": 64, 00:17:00.751 "state": "configuring", 00:17:00.751 "raid_level": "concat", 00:17:00.751 "superblock": true, 00:17:00.751 "num_base_bdevs": 3, 00:17:00.751 "num_base_bdevs_discovered": 1, 00:17:00.751 "num_base_bdevs_operational": 3, 00:17:00.751 "base_bdevs_list": [ 00:17:00.751 { 00:17:00.751 "name": "pt1", 00:17:00.751 "uuid": "52f42e00-611d-5121-b0ce-6ccbc73812ca", 00:17:00.751 "is_configured": true, 00:17:00.751 "data_offset": 2048, 00:17:00.751 "data_size": 63488 00:17:00.751 }, 00:17:00.751 { 00:17:00.751 "name": null, 00:17:00.751 "uuid": 
"3572d40e-cd52-5bb4-ac2f-3ca062f45fff", 00:17:00.751 "is_configured": false, 00:17:00.751 "data_offset": 2048, 00:17:00.751 "data_size": 63488 00:17:00.751 }, 00:17:00.751 { 00:17:00.751 "name": null, 00:17:00.751 "uuid": "60337127-5536-56c3-81c7-49dae275820c", 00:17:00.751 "is_configured": false, 00:17:00.751 "data_offset": 2048, 00:17:00.751 "data_size": 63488 00:17:00.751 } 00:17:00.751 ] 00:17:00.751 }' 00:17:00.751 05:36:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.751 05:36:04 -- common/autotest_common.sh@10 -- # set +x 00:17:01.318 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:01.318 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:01.318 05:36:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.576 [2024-10-07 05:36:05.296126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.576 [2024-10-07 05:36:05.296227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.576 [2024-10-07 05:36:05.296273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:01.576 [2024-10-07 05:36:05.296307] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.576 [2024-10-07 05:36:05.296960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.576 [2024-10-07 05:36:05.297003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.576 [2024-10-07 05:36:05.297159] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:01.576 [2024-10-07 05:36:05.297188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.576 pt2 00:17:01.576 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:01.576 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:01.576 05:36:05 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.835 [2024-10-07 05:36:05.572147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.835 [2024-10-07 05:36:05.572201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.835 [2024-10-07 05:36:05.572233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:01.835 [2024-10-07 05:36:05.572260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.835 [2024-10-07 05:36:05.572592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.835 [2024-10-07 05:36:05.572626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.835 [2024-10-07 05:36:05.572718] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:01.835 [2024-10-07 05:36:05.572740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.835 [2024-10-07 05:36:05.572856] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:01.835 [2024-10-07 05:36:05.572870] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:01.835 [2024-10-07 05:36:05.572967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:01.835 [2024-10-07 05:36:05.573253] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:01.835 [2024-10-07 05:36:05.573274] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:01.835 [2024-10-07 05:36:05.573395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.835 pt3 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.835 "name": "raid_bdev1", 00:17:01.835 "uuid": "3590c0be-f230-46f5-ba95-9c037937123c", 00:17:01.835 "strip_size_kb": 64, 00:17:01.835 "state": "online", 00:17:01.835 "raid_level": "concat", 00:17:01.835 "superblock": true, 00:17:01.835 "num_base_bdevs": 3, 00:17:01.835 "num_base_bdevs_discovered": 3, 00:17:01.835 "num_base_bdevs_operational": 3, 00:17:01.835 "base_bdevs_list": [ 00:17:01.835 { 00:17:01.835 "name": "pt1", 00:17:01.835 "uuid": "52f42e00-611d-5121-b0ce-6ccbc73812ca", 00:17:01.835 "is_configured": true, 00:17:01.835 "data_offset": 2048, 00:17:01.835 "data_size": 63488 00:17:01.835 }, 00:17:01.835 { 00:17:01.835 "name": "pt2", 00:17:01.835 "uuid": "3572d40e-cd52-5bb4-ac2f-3ca062f45fff", 00:17:01.835 "is_configured": true, 00:17:01.835 "data_offset": 2048, 00:17:01.835 "data_size": 63488 00:17:01.835 }, 00:17:01.835 { 00:17:01.835 "name": "pt3", 00:17:01.835 "uuid": "60337127-5536-56c3-81c7-49dae275820c", 00:17:01.835 "is_configured": true, 00:17:01.835 "data_offset": 2048, 00:17:01.835 "data_size": 63488 00:17:01.835 } 00:17:01.835 ] 00:17:01.835 }' 00:17:01.835 05:36:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.835 05:36:05 -- common/autotest_common.sh@10 -- # set +x 00:17:02.404 05:36:06 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:02.404 05:36:06 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:02.662 [2024-10-07 05:36:06.526699] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.662 05:36:06 -- bdev/bdev_raid.sh@430 -- # '[' 3590c0be-f230-46f5-ba95-9c037937123c '!=' 3590c0be-f230-46f5-ba95-9c037937123c ']' 00:17:02.662 05:36:06 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:02.662 05:36:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:02.662 
05:36:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:02.662 05:36:06 -- bdev/bdev_raid.sh@511 -- # killprocess 145755 00:17:02.662 05:36:06 -- common/autotest_common.sh@926 -- # '[' -z 145755 ']' 00:17:02.662 05:36:06 -- common/autotest_common.sh@930 -- # kill -0 145755 00:17:02.662 05:36:06 -- common/autotest_common.sh@931 -- # uname 00:17:02.662 05:36:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.662 05:36:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145755 00:17:02.662 05:36:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:02.662 05:36:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:02.662 05:36:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145755' 00:17:02.662 killing process with pid 145755 00:17:02.662 05:36:06 -- common/autotest_common.sh@945 -- # kill 145755 00:17:02.663 [2024-10-07 05:36:06.567347] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.663 05:36:06 -- common/autotest_common.sh@950 -- # wait 145755 00:17:02.663 [2024-10-07 05:36:06.567418] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.663 [2024-10-07 05:36:06.567476] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.663 [2024-10-07 05:36:06.567486] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:02.920 [2024-10-07 05:36:06.755375] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:03.855 00:17:03.855 real 0m10.325s 00:17:03.855 user 0m17.848s 00:17:03.855 sys 0m1.387s 00:17:03.855 05:36:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.855 05:36:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.855 ************************************ 00:17:03.855 END TEST raid_superblock_test 00:17:03.855 ************************************ 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:03.855 05:36:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:03.855 05:36:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:03.855 05:36:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.855 ************************************ 00:17:03.855 START TEST raid_state_function_test 00:17:03.855 ************************************ 00:17:03.855 05:36:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:03.855 05:36:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=146403 00:17:03.856 Process raid pid: 146403 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 146403' 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:03.856 05:36:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 146403 /var/tmp/spdk-raid.sock 00:17:03.856 05:36:07 -- common/autotest_common.sh@819 -- # '[' -z 146403 ']' 00:17:03.856 05:36:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:03.856 05:36:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:03.856 05:36:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:03.856 05:36:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.856 05:36:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.856 [2024-10-07 05:36:07.780197] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:03.856 [2024-10-07 05:36:07.780357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.115 [2024-10-07 05:36:07.937300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.373 [2024-10-07 05:36:08.132959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.373 [2024-10-07 05:36:08.326783] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.939 05:36:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.939 05:36:08 -- common/autotest_common.sh@852 -- # return 0 00:17:04.939 05:36:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:04.939 [2024-10-07 05:36:08.845027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.939 [2024-10-07 05:36:08.845142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.939 [2024-10-07 05:36:08.845157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.940 [2024-10-07 05:36:08.845177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.940 [2024-10-07 05:36:08.845184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:04.940 [2024-10-07 05:36:08.845229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.940 05:36:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.199 05:36:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.199 "name": "Existed_Raid", 00:17:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.199 "strip_size_kb": 0, 00:17:05.199 "state": "configuring", 00:17:05.199 "raid_level": "raid1", 00:17:05.199 "superblock": false, 00:17:05.199 "num_base_bdevs": 3, 00:17:05.199 "num_base_bdevs_discovered": 0, 00:17:05.199 "num_base_bdevs_operational": 3, 00:17:05.199 "base_bdevs_list": [ 00:17:05.199 { 00:17:05.199 "name": "BaseBdev1", 00:17:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.199 "is_configured": false, 00:17:05.199 "data_offset": 0, 00:17:05.199 "data_size": 0 00:17:05.199 }, 00:17:05.199 { 00:17:05.199 "name": "BaseBdev2", 00:17:05.199 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:05.199 "is_configured": false, 00:17:05.199 "data_offset": 0, 00:17:05.199 "data_size": 0 00:17:05.199 }, 00:17:05.199 { 00:17:05.199 "name": "BaseBdev3", 00:17:05.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.199 "is_configured": false, 00:17:05.199 "data_offset": 0, 00:17:05.199 "data_size": 0 00:17:05.199 } 00:17:05.199 ] 00:17:05.199 }' 00:17:05.199 05:36:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.199 05:36:09 -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 05:36:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.025 [2024-10-07 05:36:09.913156] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.025 [2024-10-07 05:36:09.913206] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:06.025 05:36:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:06.283 [2024-10-07 05:36:10.189188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.283 [2024-10-07 05:36:10.189259] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.283 [2024-10-07 05:36:10.189273] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.283 [2024-10-07 05:36:10.189302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.283 [2024-10-07 05:36:10.189310] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.283 [2024-10-07 05:36:10.189336] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.283 05:36:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.541 [2024-10-07 05:36:10.493310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.541 BaseBdev1 00:17:06.541 05:36:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:06.541 05:36:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:06.541 05:36:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:06.541 05:36:10 -- common/autotest_common.sh@889 -- # local i 00:17:06.541 05:36:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:06.541 05:36:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:06.541 05:36:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.109 05:36:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.109 [ 00:17:07.109 { 00:17:07.109 "name": "BaseBdev1", 00:17:07.109 "aliases": [ 00:17:07.109 "46e1dd30-0206-4ce2-9617-22790d11f8f5" 00:17:07.109 ], 00:17:07.109 "product_name": "Malloc disk", 00:17:07.109 "block_size": 512, 00:17:07.109 "num_blocks": 65536, 00:17:07.109 "uuid": "46e1dd30-0206-4ce2-9617-22790d11f8f5", 00:17:07.109 "assigned_rate_limits": { 00:17:07.109 "rw_ios_per_sec": 0, 00:17:07.109 "rw_mbytes_per_sec": 0, 00:17:07.109 "r_mbytes_per_sec": 0, 00:17:07.109 "w_mbytes_per_sec": 0 
00:17:07.109 }, 00:17:07.109 "claimed": true, 00:17:07.109 "claim_type": "exclusive_write", 00:17:07.109 "zoned": false, 00:17:07.109 "supported_io_types": { 00:17:07.109 "read": true, 00:17:07.109 "write": true, 00:17:07.109 "unmap": true, 00:17:07.109 "write_zeroes": true, 00:17:07.109 "flush": true, 00:17:07.109 "reset": true, 00:17:07.109 "compare": false, 00:17:07.109 "compare_and_write": false, 00:17:07.109 "abort": true, 00:17:07.109 "nvme_admin": false, 00:17:07.109 "nvme_io": false 00:17:07.109 }, 00:17:07.109 "memory_domains": [ 00:17:07.109 { 00:17:07.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.109 "dma_device_type": 2 00:17:07.109 } 00:17:07.109 ], 00:17:07.109 "driver_specific": {} 00:17:07.109 } 00:17:07.109 ] 00:17:07.109 05:36:11 -- common/autotest_common.sh@895 -- # return 0 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.109 05:36:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.368 05:36:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.368 "name": "Existed_Raid", 00:17:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.368 "strip_size_kb": 0, 00:17:07.368 "state": "configuring", 00:17:07.368 "raid_level": "raid1", 00:17:07.368 "superblock": false, 00:17:07.368 "num_base_bdevs": 3, 00:17:07.368 "num_base_bdevs_discovered": 1, 00:17:07.368 "num_base_bdevs_operational": 3, 00:17:07.368 "base_bdevs_list": [ 00:17:07.368 { 00:17:07.368 "name": "BaseBdev1", 00:17:07.368 "uuid": "46e1dd30-0206-4ce2-9617-22790d11f8f5", 00:17:07.368 "is_configured": true, 00:17:07.368 "data_offset": 0, 00:17:07.368 "data_size": 65536 00:17:07.368 }, 00:17:07.368 { 00:17:07.368 "name": "BaseBdev2", 00:17:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.368 "is_configured": false, 00:17:07.368 "data_offset": 0, 00:17:07.368 "data_size": 0 00:17:07.368 }, 00:17:07.368 { 00:17:07.368 "name": "BaseBdev3", 00:17:07.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.368 "is_configured": false, 00:17:07.368 "data_offset": 0, 00:17:07.368 "data_size": 0 00:17:07.368 } 00:17:07.368 ] 00:17:07.368 }' 00:17:07.368 05:36:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.368 05:36:11 -- common/autotest_common.sh@10 -- # set +x 00:17:07.936 05:36:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.195 [2024-10-07 05:36:12.073730] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.195 [2024-10-07 05:36:12.073817] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:17:08.195 05:36:12 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:08.195 05:36:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:08.454 [2024-10-07 05:36:12.337823] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.454 [2024-10-07 05:36:12.340177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.454 [2024-10-07 05:36:12.340262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.454 [2024-10-07 05:36:12.340275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.454 [2024-10-07 05:36:12.340307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.454 05:36:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.712 05:36:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.712 "name": "Existed_Raid", 00:17:08.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.712 "strip_size_kb": 0, 00:17:08.712 "state": "configuring", 00:17:08.712 "raid_level": "raid1", 00:17:08.712 "superblock": false, 00:17:08.712 "num_base_bdevs": 3, 00:17:08.712 "num_base_bdevs_discovered": 1, 00:17:08.712 "num_base_bdevs_operational": 3, 00:17:08.712 "base_bdevs_list": [ 00:17:08.712 { 00:17:08.712 "name": "BaseBdev1", 00:17:08.712 "uuid": "46e1dd30-0206-4ce2-9617-22790d11f8f5", 00:17:08.712 "is_configured": true, 00:17:08.712 "data_offset": 0, 00:17:08.712 "data_size": 65536 00:17:08.712 }, 00:17:08.712 { 00:17:08.712 "name": "BaseBdev2", 00:17:08.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.712 "is_configured": false, 00:17:08.712 "data_offset": 0, 00:17:08.712 "data_size": 0 00:17:08.712 }, 00:17:08.712 { 00:17:08.712 "name": "BaseBdev3", 00:17:08.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.712 "is_configured": false, 00:17:08.712 "data_offset": 0, 00:17:08.712 "data_size": 0 00:17:08.712 } 00:17:08.712 ] 00:17:08.712 }' 00:17:08.712 05:36:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.712 05:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:09.279 05:36:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:09.538 [2024-10-07 05:36:13.496760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.538 BaseBdev2 00:17:09.538 05:36:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:09.538 05:36:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:09.538 05:36:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:09.538 05:36:13 -- common/autotest_common.sh@889 -- # local i 00:17:09.538 05:36:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:09.538 05:36:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:09.538 05:36:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.796 05:36:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.055 [ 00:17:10.055 { 00:17:10.055 "name": "BaseBdev2", 00:17:10.055 "aliases": [ 00:17:10.055 "6bd2c1b0-65e4-4266-b7f9-b181ea30d572" 00:17:10.055 ], 00:17:10.055 "product_name": "Malloc disk", 00:17:10.055 "block_size": 512, 00:17:10.055 "num_blocks": 65536, 00:17:10.055 "uuid": "6bd2c1b0-65e4-4266-b7f9-b181ea30d572", 00:17:10.055 "assigned_rate_limits": { 00:17:10.055 "rw_ios_per_sec": 0, 00:17:10.055 "rw_mbytes_per_sec": 0, 00:17:10.055 "r_mbytes_per_sec": 0, 00:17:10.055 "w_mbytes_per_sec": 0 00:17:10.055 }, 00:17:10.055 "claimed": true, 00:17:10.055 "claim_type": "exclusive_write", 00:17:10.055 "zoned": false, 00:17:10.055 "supported_io_types": { 00:17:10.055 "read": true, 00:17:10.055 "write": true, 00:17:10.055 "unmap": true, 00:17:10.055 "write_zeroes": true, 00:17:10.055 "flush": true, 00:17:10.055 "reset": true, 00:17:10.055 "compare": false, 00:17:10.055 "compare_and_write": false, 00:17:10.055 "abort": true, 00:17:10.055 "nvme_admin": false, 00:17:10.055 "nvme_io": false 00:17:10.055 }, 00:17:10.055 "memory_domains": [ 00:17:10.055 { 00:17:10.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.055 "dma_device_type": 2 00:17:10.055 } 00:17:10.055 ], 00:17:10.055 "driver_specific": {} 00:17:10.055 } 00:17:10.055 ] 00:17:10.055 05:36:13 -- common/autotest_common.sh@895 -- # return 0 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.055 05:36:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.313 05:36:14 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:10.313 "name": "Existed_Raid", 00:17:10.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.313 "strip_size_kb": 0, 00:17:10.313 "state": "configuring", 00:17:10.313 "raid_level": "raid1", 00:17:10.313 "superblock": false, 00:17:10.313 "num_base_bdevs": 3, 00:17:10.313 "num_base_bdevs_discovered": 2, 00:17:10.313 "num_base_bdevs_operational": 3, 00:17:10.313 "base_bdevs_list": [ 00:17:10.313 { 00:17:10.313 "name": "BaseBdev1", 00:17:10.313 "uuid": "46e1dd30-0206-4ce2-9617-22790d11f8f5", 00:17:10.313 "is_configured": true, 00:17:10.313 "data_offset": 0, 00:17:10.313 "data_size": 65536 00:17:10.313 }, 00:17:10.313 { 00:17:10.313 "name": "BaseBdev2", 00:17:10.313 "uuid": "6bd2c1b0-65e4-4266-b7f9-b181ea30d572", 00:17:10.313 "is_configured": true, 00:17:10.313 "data_offset": 0, 00:17:10.313 "data_size": 65536 00:17:10.313 }, 00:17:10.313 { 00:17:10.313 "name": "BaseBdev3", 00:17:10.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.313 "is_configured": false, 00:17:10.313 "data_offset": 0, 00:17:10.313 "data_size": 0 00:17:10.313 } 00:17:10.313 ] 00:17:10.313 }' 00:17:10.314 05:36:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.314 05:36:14 -- common/autotest_common.sh@10 -- # set +x 00:17:10.881 05:36:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.140 [2024-10-07 05:36:15.005667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.140 [2024-10-07 05:36:15.005745] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:11.140 [2024-10-07 05:36:15.005755] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:11.140 [2024-10-07 05:36:15.005869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:11.140 [2024-10-07 05:36:15.006292] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:11.140 [2024-10-07 05:36:15.006318] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:11.140 [2024-10-07 05:36:15.006670] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.140 BaseBdev3 00:17:11.140 05:36:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:11.140 05:36:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:11.140 05:36:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:11.140 05:36:15 -- common/autotest_common.sh@889 -- # local i 00:17:11.140 05:36:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:11.140 05:36:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:11.140 05:36:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.398 05:36:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:11.657 [ 00:17:11.657 { 00:17:11.657 "name": "BaseBdev3", 00:17:11.657 "aliases": [ 00:17:11.657 "d05d7a1d-9c9d-4f96-a5f5-a83bebddf60f" 00:17:11.657 ], 00:17:11.657 "product_name": "Malloc disk", 00:17:11.657 "block_size": 512, 00:17:11.657 "num_blocks": 65536, 00:17:11.657 "uuid": "d05d7a1d-9c9d-4f96-a5f5-a83bebddf60f", 00:17:11.657 "assigned_rate_limits": { 00:17:11.657 "rw_ios_per_sec": 0, 00:17:11.657 "rw_mbytes_per_sec": 0, 
00:17:11.657 "r_mbytes_per_sec": 0, 00:17:11.657 "w_mbytes_per_sec": 0 00:17:11.657 }, 00:17:11.657 "claimed": true, 00:17:11.657 "claim_type": "exclusive_write", 00:17:11.657 "zoned": false, 00:17:11.657 "supported_io_types": { 00:17:11.657 "read": true, 00:17:11.657 "write": true, 00:17:11.657 "unmap": true, 00:17:11.657 "write_zeroes": true, 00:17:11.657 "flush": true, 00:17:11.657 "reset": true, 00:17:11.657 "compare": false, 00:17:11.657 "compare_and_write": false, 00:17:11.657 "abort": true, 00:17:11.657 "nvme_admin": false, 00:17:11.657 "nvme_io": false 00:17:11.657 }, 00:17:11.657 "memory_domains": [ 00:17:11.657 { 00:17:11.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.657 "dma_device_type": 2 00:17:11.657 } 00:17:11.657 ], 00:17:11.657 "driver_specific": {} 00:17:11.657 } 00:17:11.657 ] 00:17:11.657 05:36:15 -- common/autotest_common.sh@895 -- # return 0 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.657 05:36:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.915 05:36:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.915 "name": "Existed_Raid", 00:17:11.915 "uuid": "fa6c48a8-45ce-471c-b862-7948f274ae39", 00:17:11.915 "strip_size_kb": 0, 00:17:11.915 "state": "online", 00:17:11.915 "raid_level": "raid1", 00:17:11.915 "superblock": false, 00:17:11.915 "num_base_bdevs": 3, 00:17:11.915 "num_base_bdevs_discovered": 3, 00:17:11.916 "num_base_bdevs_operational": 3, 00:17:11.916 "base_bdevs_list": [ 00:17:11.916 { 00:17:11.916 "name": "BaseBdev1", 00:17:11.916 "uuid": "46e1dd30-0206-4ce2-9617-22790d11f8f5", 00:17:11.916 "is_configured": true, 00:17:11.916 "data_offset": 0, 00:17:11.916 "data_size": 65536 00:17:11.916 }, 00:17:11.916 { 00:17:11.916 "name": "BaseBdev2", 00:17:11.916 "uuid": "6bd2c1b0-65e4-4266-b7f9-b181ea30d572", 00:17:11.916 "is_configured": true, 00:17:11.916 "data_offset": 0, 00:17:11.916 "data_size": 65536 00:17:11.916 }, 00:17:11.916 { 00:17:11.916 "name": "BaseBdev3", 00:17:11.916 "uuid": "d05d7a1d-9c9d-4f96-a5f5-a83bebddf60f", 00:17:11.916 "is_configured": true, 00:17:11.916 "data_offset": 0, 00:17:11.916 "data_size": 65536 00:17:11.916 } 00:17:11.916 ] 00:17:11.916 }' 00:17:11.916 05:36:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.916 05:36:15 -- common/autotest_common.sh@10 -- # set +x 00:17:12.481 05:36:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:12.740 [2024-10-07 
05:36:16.602166] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.740 05:36:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.999 05:36:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.999 "name": "Existed_Raid", 00:17:12.999 "uuid": "fa6c48a8-45ce-471c-b862-7948f274ae39", 00:17:12.999 "strip_size_kb": 0, 00:17:12.999 "state": "online", 00:17:12.999 "raid_level": "raid1", 00:17:12.999 "superblock": false, 00:17:12.999 "num_base_bdevs": 3, 00:17:12.999 "num_base_bdevs_discovered": 2, 00:17:12.999 "num_base_bdevs_operational": 2, 00:17:12.999 "base_bdevs_list": [ 00:17:12.999 { 00:17:12.999 "name": null, 00:17:12.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.999 "is_configured": false, 00:17:12.999 "data_offset": 0, 00:17:12.999 "data_size": 65536 00:17:12.999 }, 00:17:12.999 { 00:17:12.999 "name": "BaseBdev2", 00:17:12.999 "uuid": "6bd2c1b0-65e4-4266-b7f9-b181ea30d572", 00:17:12.999 "is_configured": true, 00:17:12.999 "data_offset": 0, 00:17:12.999 "data_size": 65536 00:17:12.999 }, 00:17:12.999 { 00:17:12.999 "name": "BaseBdev3", 00:17:12.999 "uuid": "d05d7a1d-9c9d-4f96-a5f5-a83bebddf60f", 00:17:12.999 "is_configured": true, 00:17:12.999 "data_offset": 0, 00:17:12.999 "data_size": 65536 00:17:12.999 } 00:17:12.999 ] 00:17:12.999 }' 00:17:12.999 05:36:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.999 05:36:16 -- common/autotest_common.sh@10 -- # set +x 00:17:13.565 05:36:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:13.565 05:36:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.565 05:36:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:13.566 05:36:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.824 05:36:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:13.824 05:36:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.824 05:36:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:14.082 [2024-10-07 05:36:18.007440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:17:14.341 05:36:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:14.341 05:36:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.341 05:36:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.341 05:36:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:14.599 05:36:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:14.599 05:36:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:14.599 05:36:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:14.599 [2024-10-07 05:36:18.521007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:14.599 [2024-10-07 05:36:18.521091] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.599 [2024-10-07 05:36:18.521176] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.856 [2024-10-07 05:36:18.601227] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.856 [2024-10-07 05:36:18.601272] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:14.856 05:36:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:14.856 05:36:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.856 05:36:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:14.856 05:36:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.113 05:36:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:15.113 05:36:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:15.113 05:36:18 -- bdev/bdev_raid.sh@287 -- # killprocess 146403 00:17:15.113 05:36:18 -- common/autotest_common.sh@926 -- # '[' -z 146403 ']' 00:17:15.114 05:36:18 -- common/autotest_common.sh@930 -- # kill -0 146403 00:17:15.114 05:36:18 -- common/autotest_common.sh@931 -- # uname 00:17:15.114 05:36:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.114 05:36:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146403 00:17:15.114 killing process with pid 146403 00:17:15.114 05:36:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.114 05:36:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.114 05:36:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146403' 00:17:15.114 05:36:18 -- common/autotest_common.sh@945 -- # kill 146403 00:17:15.114 05:36:18 -- common/autotest_common.sh@950 -- # wait 146403 00:17:15.114 [2024-10-07 05:36:18.904600] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.114 [2024-10-07 05:36:18.904781] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.049 05:36:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:16.050 00:17:16.050 real 0m12.252s 00:17:16.050 user 0m21.388s 00:17:16.050 sys 0m1.564s 00:17:16.050 05:36:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.050 05:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:16.050 ************************************ 00:17:16.050 END TEST raid_state_function_test 00:17:16.050 ************************************ 00:17:16.050 05:36:20 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
00:17:16.050 05:36:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:16.050 05:36:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:16.050 05:36:20 -- common/autotest_common.sh@10 -- # set +x 00:17:16.309 ************************************ 00:17:16.309 START TEST raid_state_function_test_sb 00:17:16.309 ************************************ 00:17:16.309 05:36:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=147224 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 147224' 00:17:16.309 Process raid pid: 147224 00:17:16.309 05:36:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 147224 /var/tmp/spdk-raid.sock 00:17:16.309 05:36:20 -- common/autotest_common.sh@819 -- # '[' -z 147224 ']' 00:17:16.309 05:36:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:16.309 05:36:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:16.309 05:36:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:16.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:16.309 05:36:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:16.309 05:36:20 -- common/autotest_common.sh@10 -- # set +x 00:17:16.309 [2024-10-07 05:36:20.101169] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:16.309 [2024-10-07 05:36:20.101356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.309 [2024-10-07 05:36:20.253525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.569 [2024-10-07 05:36:20.456701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.827 [2024-10-07 05:36:20.652080] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.408 05:36:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:17.408 05:36:21 -- common/autotest_common.sh@852 -- # return 0 00:17:17.408 05:36:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:17.408 [2024-10-07 05:36:21.325039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.408 [2024-10-07 05:36:21.325122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.408 [2024-10-07 05:36:21.325136] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.408 [2024-10-07 05:36:21.325154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.408 [2024-10-07 05:36:21.325161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:17.408 [2024-10-07 05:36:21.325203] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.409 05:36:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.690 05:36:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.690 "name": "Existed_Raid", 00:17:17.690 "uuid": "cc6b6133-943f-41c9-8686-542a2df9f6a0", 00:17:17.690 "strip_size_kb": 0, 00:17:17.690 "state": "configuring", 00:17:17.690 "raid_level": "raid1", 00:17:17.690 "superblock": true, 00:17:17.690 "num_base_bdevs": 3, 00:17:17.690 "num_base_bdevs_discovered": 0, 00:17:17.690 "num_base_bdevs_operational": 3, 00:17:17.690 "base_bdevs_list": [ 00:17:17.690 { 00:17:17.690 "name": "BaseBdev1", 00:17:17.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.690 "is_configured": false, 00:17:17.690 "data_offset": 0, 00:17:17.690 "data_size": 0 00:17:17.690 }, 00:17:17.690 { 00:17:17.690 "name": "BaseBdev2", 00:17:17.690 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:17.690 "is_configured": false, 00:17:17.690 "data_offset": 0, 00:17:17.690 "data_size": 0 00:17:17.690 }, 00:17:17.690 { 00:17:17.690 "name": "BaseBdev3", 00:17:17.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.690 "is_configured": false, 00:17:17.690 "data_offset": 0, 00:17:17.690 "data_size": 0 00:17:17.690 } 00:17:17.690 ] 00:17:17.690 }' 00:17:17.690 05:36:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.690 05:36:21 -- common/autotest_common.sh@10 -- # set +x 00:17:18.255 05:36:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.513 [2024-10-07 05:36:22.389082] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.513 [2024-10-07 05:36:22.389122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:18.513 05:36:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:18.771 [2024-10-07 05:36:22.577171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.771 [2024-10-07 05:36:22.577232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.771 [2024-10-07 05:36:22.577243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.771 [2024-10-07 05:36:22.577271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.771 [2024-10-07 05:36:22.577278] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.771 [2024-10-07 05:36:22.577302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.772 05:36:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:19.030 [2024-10-07 05:36:22.806942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.030 BaseBdev1 00:17:19.030 05:36:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:19.030 05:36:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:19.030 05:36:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:19.030 05:36:22 -- common/autotest_common.sh@889 -- # local i 00:17:19.030 05:36:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:19.030 05:36:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:19.030 05:36:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.289 05:36:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.548 [ 00:17:19.548 { 00:17:19.548 "name": "BaseBdev1", 00:17:19.548 "aliases": [ 00:17:19.548 "7ebb572c-7b16-4fb8-a2eb-273b93eb9a16" 00:17:19.548 ], 00:17:19.548 "product_name": "Malloc disk", 00:17:19.548 "block_size": 512, 00:17:19.548 "num_blocks": 65536, 00:17:19.548 "uuid": "7ebb572c-7b16-4fb8-a2eb-273b93eb9a16", 00:17:19.548 "assigned_rate_limits": { 00:17:19.548 "rw_ios_per_sec": 0, 00:17:19.548 "rw_mbytes_per_sec": 0, 00:17:19.548 "r_mbytes_per_sec": 0, 00:17:19.548 "w_mbytes_per_sec": 0 
00:17:19.548 }, 00:17:19.548 "claimed": true, 00:17:19.548 "claim_type": "exclusive_write", 00:17:19.548 "zoned": false, 00:17:19.548 "supported_io_types": { 00:17:19.548 "read": true, 00:17:19.548 "write": true, 00:17:19.548 "unmap": true, 00:17:19.548 "write_zeroes": true, 00:17:19.548 "flush": true, 00:17:19.548 "reset": true, 00:17:19.548 "compare": false, 00:17:19.548 "compare_and_write": false, 00:17:19.548 "abort": true, 00:17:19.548 "nvme_admin": false, 00:17:19.548 "nvme_io": false 00:17:19.548 }, 00:17:19.548 "memory_domains": [ 00:17:19.548 { 00:17:19.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.548 "dma_device_type": 2 00:17:19.548 } 00:17:19.548 ], 00:17:19.548 "driver_specific": {} 00:17:19.548 } 00:17:19.548 ] 00:17:19.548 05:36:23 -- common/autotest_common.sh@895 -- # return 0 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.548 "name": "Existed_Raid", 00:17:19.548 "uuid": "ec8c0ea5-d09e-4c29-a6c1-f61111f29b46", 00:17:19.548 "strip_size_kb": 0, 00:17:19.548 "state": "configuring", 00:17:19.548 "raid_level": "raid1", 00:17:19.548 "superblock": true, 00:17:19.548 "num_base_bdevs": 3, 00:17:19.548 "num_base_bdevs_discovered": 1, 00:17:19.548 "num_base_bdevs_operational": 3, 00:17:19.548 "base_bdevs_list": [ 00:17:19.548 { 00:17:19.548 "name": "BaseBdev1", 00:17:19.548 "uuid": "7ebb572c-7b16-4fb8-a2eb-273b93eb9a16", 00:17:19.548 "is_configured": true, 00:17:19.548 "data_offset": 2048, 00:17:19.548 "data_size": 63488 00:17:19.548 }, 00:17:19.548 { 00:17:19.548 "name": "BaseBdev2", 00:17:19.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.548 "is_configured": false, 00:17:19.548 "data_offset": 0, 00:17:19.548 "data_size": 0 00:17:19.548 }, 00:17:19.548 { 00:17:19.548 "name": "BaseBdev3", 00:17:19.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.548 "is_configured": false, 00:17:19.548 "data_offset": 0, 00:17:19.548 "data_size": 0 00:17:19.548 } 00:17:19.548 ] 00:17:19.548 }' 00:17:19.548 05:36:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.548 05:36:23 -- common/autotest_common.sh@10 -- # set +x 00:17:20.115 05:36:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.374 [2024-10-07 05:36:24.235384] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.374 [2024-10-07 05:36:24.235446] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:20.374 05:36:24 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:20.374 05:36:24 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:20.633 05:36:24 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.891 BaseBdev1 00:17:20.891 05:36:24 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:20.891 05:36:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:20.891 05:36:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:20.891 05:36:24 -- common/autotest_common.sh@889 -- # local i 00:17:20.891 05:36:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:20.891 05:36:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:20.891 05:36:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.149 05:36:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.407 [ 00:17:21.407 { 00:17:21.407 "name": "BaseBdev1", 00:17:21.407 "aliases": [ 00:17:21.407 "c8d0ba20-598c-4090-b5e1-db41da2e86ac" 00:17:21.407 ], 00:17:21.407 "product_name": "Malloc disk", 00:17:21.407 "block_size": 512, 00:17:21.407 "num_blocks": 65536, 00:17:21.407 "uuid": "c8d0ba20-598c-4090-b5e1-db41da2e86ac", 00:17:21.407 "assigned_rate_limits": { 00:17:21.407 "rw_ios_per_sec": 0, 00:17:21.407 "rw_mbytes_per_sec": 0, 00:17:21.407 "r_mbytes_per_sec": 0, 00:17:21.407 "w_mbytes_per_sec": 0 00:17:21.407 }, 00:17:21.407 "claimed": false, 00:17:21.407 "zoned": false, 00:17:21.407 "supported_io_types": { 00:17:21.407 "read": true, 00:17:21.407 "write": true, 00:17:21.407 "unmap": true, 00:17:21.407 "write_zeroes": true, 00:17:21.407 "flush": true, 00:17:21.407 "reset": true, 00:17:21.407 "compare": false, 00:17:21.407 "compare_and_write": false, 00:17:21.407 "abort": true, 00:17:21.407 "nvme_admin": false, 00:17:21.407 "nvme_io": false 00:17:21.407 }, 00:17:21.407 "memory_domains": [ 00:17:21.407 { 00:17:21.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.407 "dma_device_type": 2 00:17:21.407 } 00:17:21.407 ], 00:17:21.407 "driver_specific": {} 00:17:21.407 } 00:17:21.407 ] 00:17:21.407 05:36:25 -- common/autotest_common.sh@895 -- # return 0 00:17:21.407 05:36:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:21.666 [2024-10-07 05:36:25.414266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.666 [2024-10-07 05:36:25.416404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:21.666 [2024-10-07 05:36:25.416467] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:21.666 [2024-10-07 05:36:25.416480] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:21.666 [2024-10-07 05:36:25.416507] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:21.666 05:36:25 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.666 05:36:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.925 05:36:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.925 "name": "Existed_Raid", 00:17:21.925 "uuid": "1508fa10-3125-4a6a-a4f6-3a292836934e", 00:17:21.925 "strip_size_kb": 0, 00:17:21.925 "state": "configuring", 00:17:21.925 "raid_level": "raid1", 00:17:21.925 "superblock": true, 00:17:21.925 "num_base_bdevs": 3, 00:17:21.925 "num_base_bdevs_discovered": 1, 00:17:21.925 "num_base_bdevs_operational": 3, 00:17:21.925 "base_bdevs_list": [ 00:17:21.925 { 00:17:21.925 "name": "BaseBdev1", 00:17:21.925 "uuid": "c8d0ba20-598c-4090-b5e1-db41da2e86ac", 00:17:21.925 "is_configured": true, 00:17:21.925 "data_offset": 2048, 00:17:21.925 "data_size": 63488 00:17:21.925 }, 00:17:21.925 { 00:17:21.925 "name": "BaseBdev2", 00:17:21.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.925 "is_configured": false, 00:17:21.925 "data_offset": 0, 00:17:21.925 "data_size": 0 00:17:21.925 }, 00:17:21.925 { 00:17:21.925 "name": "BaseBdev3", 00:17:21.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.925 "is_configured": false, 00:17:21.925 "data_offset": 0, 00:17:21.925 "data_size": 0 00:17:21.925 } 00:17:21.925 ] 00:17:21.925 }' 00:17:21.925 05:36:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.925 05:36:25 -- common/autotest_common.sh@10 -- # set +x 00:17:22.492 05:36:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:22.750 [2024-10-07 05:36:26.623915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.750 BaseBdev2 00:17:22.750 05:36:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:22.750 05:36:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:22.750 05:36:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:22.750 05:36:26 -- common/autotest_common.sh@889 -- # local i 00:17:22.750 05:36:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:22.750 05:36:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:22.750 05:36:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.008 05:36:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.267 [ 00:17:23.267 { 00:17:23.267 "name": "BaseBdev2", 00:17:23.267 "aliases": [ 00:17:23.267 
"38b4e1d7-2fc2-4e63-bd1f-18150fe6e5fe" 00:17:23.267 ], 00:17:23.267 "product_name": "Malloc disk", 00:17:23.267 "block_size": 512, 00:17:23.267 "num_blocks": 65536, 00:17:23.267 "uuid": "38b4e1d7-2fc2-4e63-bd1f-18150fe6e5fe", 00:17:23.267 "assigned_rate_limits": { 00:17:23.267 "rw_ios_per_sec": 0, 00:17:23.267 "rw_mbytes_per_sec": 0, 00:17:23.267 "r_mbytes_per_sec": 0, 00:17:23.267 "w_mbytes_per_sec": 0 00:17:23.267 }, 00:17:23.267 "claimed": true, 00:17:23.267 "claim_type": "exclusive_write", 00:17:23.267 "zoned": false, 00:17:23.267 "supported_io_types": { 00:17:23.267 "read": true, 00:17:23.267 "write": true, 00:17:23.267 "unmap": true, 00:17:23.267 "write_zeroes": true, 00:17:23.267 "flush": true, 00:17:23.267 "reset": true, 00:17:23.267 "compare": false, 00:17:23.267 "compare_and_write": false, 00:17:23.267 "abort": true, 00:17:23.267 "nvme_admin": false, 00:17:23.267 "nvme_io": false 00:17:23.267 }, 00:17:23.267 "memory_domains": [ 00:17:23.267 { 00:17:23.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.267 "dma_device_type": 2 00:17:23.267 } 00:17:23.267 ], 00:17:23.267 "driver_specific": {} 00:17:23.267 } 00:17:23.267 ] 00:17:23.267 05:36:27 -- common/autotest_common.sh@895 -- # return 0 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.267 "name": "Existed_Raid", 00:17:23.267 "uuid": "1508fa10-3125-4a6a-a4f6-3a292836934e", 00:17:23.267 "strip_size_kb": 0, 00:17:23.267 "state": "configuring", 00:17:23.267 "raid_level": "raid1", 00:17:23.267 "superblock": true, 00:17:23.267 "num_base_bdevs": 3, 00:17:23.267 "num_base_bdevs_discovered": 2, 00:17:23.267 "num_base_bdevs_operational": 3, 00:17:23.267 "base_bdevs_list": [ 00:17:23.267 { 00:17:23.267 "name": "BaseBdev1", 00:17:23.267 "uuid": "c8d0ba20-598c-4090-b5e1-db41da2e86ac", 00:17:23.267 "is_configured": true, 00:17:23.267 "data_offset": 2048, 00:17:23.267 "data_size": 63488 00:17:23.267 }, 00:17:23.267 { 00:17:23.267 "name": "BaseBdev2", 00:17:23.267 "uuid": "38b4e1d7-2fc2-4e63-bd1f-18150fe6e5fe", 00:17:23.267 "is_configured": true, 00:17:23.267 "data_offset": 2048, 00:17:23.267 "data_size": 63488 00:17:23.267 }, 00:17:23.267 { 00:17:23.267 "name": "BaseBdev3", 00:17:23.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.267 "is_configured": false, 00:17:23.267 "data_offset": 0, 00:17:23.267 "data_size": 0 00:17:23.267 } 
00:17:23.267 ] 00:17:23.267 }' 00:17:23.267 05:36:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.267 05:36:27 -- common/autotest_common.sh@10 -- # set +x 00:17:24.202 05:36:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.202 [2024-10-07 05:36:28.123530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.202 [2024-10-07 05:36:28.123795] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:24.202 [2024-10-07 05:36:28.123844] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:24.202 [2024-10-07 05:36:28.124012] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:24.202 [2024-10-07 05:36:28.124461] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:24.202 [2024-10-07 05:36:28.124487] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:24.202 [2024-10-07 05:36:28.124702] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.202 BaseBdev3 00:17:24.202 05:36:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:24.202 05:36:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:24.202 05:36:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.202 05:36:28 -- common/autotest_common.sh@889 -- # local i 00:17:24.202 05:36:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.202 05:36:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.202 05:36:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.461 05:36:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:24.719 [ 00:17:24.720 { 00:17:24.720 "name": "BaseBdev3", 00:17:24.720 "aliases": [ 00:17:24.720 "8085ccdd-c11c-4dbd-b82a-7d8c76887730" 00:17:24.720 ], 00:17:24.720 "product_name": "Malloc disk", 00:17:24.720 "block_size": 512, 00:17:24.720 "num_blocks": 65536, 00:17:24.720 "uuid": "8085ccdd-c11c-4dbd-b82a-7d8c76887730", 00:17:24.720 "assigned_rate_limits": { 00:17:24.720 "rw_ios_per_sec": 0, 00:17:24.720 "rw_mbytes_per_sec": 0, 00:17:24.720 "r_mbytes_per_sec": 0, 00:17:24.720 "w_mbytes_per_sec": 0 00:17:24.720 }, 00:17:24.720 "claimed": true, 00:17:24.720 "claim_type": "exclusive_write", 00:17:24.720 "zoned": false, 00:17:24.720 "supported_io_types": { 00:17:24.720 "read": true, 00:17:24.720 "write": true, 00:17:24.720 "unmap": true, 00:17:24.720 "write_zeroes": true, 00:17:24.720 "flush": true, 00:17:24.720 "reset": true, 00:17:24.720 "compare": false, 00:17:24.720 "compare_and_write": false, 00:17:24.720 "abort": true, 00:17:24.720 "nvme_admin": false, 00:17:24.720 "nvme_io": false 00:17:24.720 }, 00:17:24.720 "memory_domains": [ 00:17:24.720 { 00:17:24.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.720 "dma_device_type": 2 00:17:24.720 } 00:17:24.720 ], 00:17:24.720 "driver_specific": {} 00:17:24.720 } 00:17:24.720 ] 00:17:24.720 05:36:28 -- common/autotest_common.sh@895 -- # return 0 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.720 05:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.978 05:36:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.978 "name": "Existed_Raid", 00:17:24.978 "uuid": "1508fa10-3125-4a6a-a4f6-3a292836934e", 00:17:24.978 "strip_size_kb": 0, 00:17:24.978 "state": "online", 00:17:24.978 "raid_level": "raid1", 00:17:24.978 "superblock": true, 00:17:24.978 "num_base_bdevs": 3, 00:17:24.978 "num_base_bdevs_discovered": 3, 00:17:24.978 "num_base_bdevs_operational": 3, 00:17:24.978 "base_bdevs_list": [ 00:17:24.978 { 00:17:24.978 "name": "BaseBdev1", 00:17:24.978 "uuid": "c8d0ba20-598c-4090-b5e1-db41da2e86ac", 00:17:24.978 "is_configured": true, 00:17:24.978 "data_offset": 2048, 00:17:24.978 "data_size": 63488 00:17:24.978 }, 00:17:24.978 { 00:17:24.978 "name": "BaseBdev2", 00:17:24.978 "uuid": "38b4e1d7-2fc2-4e63-bd1f-18150fe6e5fe", 00:17:24.978 "is_configured": true, 00:17:24.978 "data_offset": 2048, 00:17:24.978 "data_size": 63488 00:17:24.978 }, 00:17:24.978 { 00:17:24.978 "name": "BaseBdev3", 00:17:24.978 "uuid": "8085ccdd-c11c-4dbd-b82a-7d8c76887730", 00:17:24.978 "is_configured": true, 00:17:24.978 "data_offset": 2048, 00:17:24.978 "data_size": 63488 00:17:24.978 } 00:17:24.978 ] 00:17:24.978 }' 00:17:24.978 05:36:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.978 05:36:28 -- common/autotest_common.sh@10 -- # set +x 00:17:25.914 05:36:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:25.914 [2024-10-07 05:36:29.864078] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.172 05:36:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.431 05:36:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.431 "name": "Existed_Raid", 00:17:26.431 "uuid": "1508fa10-3125-4a6a-a4f6-3a292836934e", 00:17:26.431 "strip_size_kb": 0, 00:17:26.431 "state": "online", 00:17:26.431 "raid_level": "raid1", 00:17:26.431 "superblock": true, 00:17:26.431 "num_base_bdevs": 3, 00:17:26.431 "num_base_bdevs_discovered": 2, 00:17:26.431 "num_base_bdevs_operational": 2, 00:17:26.431 "base_bdevs_list": [ 00:17:26.431 { 00:17:26.431 "name": null, 00:17:26.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.431 "is_configured": false, 00:17:26.431 "data_offset": 2048, 00:17:26.431 "data_size": 63488 00:17:26.431 }, 00:17:26.431 { 00:17:26.431 "name": "BaseBdev2", 00:17:26.431 "uuid": "38b4e1d7-2fc2-4e63-bd1f-18150fe6e5fe", 00:17:26.431 "is_configured": true, 00:17:26.431 "data_offset": 2048, 00:17:26.431 "data_size": 63488 00:17:26.431 }, 00:17:26.431 { 00:17:26.431 "name": "BaseBdev3", 00:17:26.431 "uuid": "8085ccdd-c11c-4dbd-b82a-7d8c76887730", 00:17:26.431 "is_configured": true, 00:17:26.431 "data_offset": 2048, 00:17:26.431 "data_size": 63488 00:17:26.431 } 00:17:26.431 ] 00:17:26.431 }' 00:17:26.431 05:36:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.431 05:36:30 -- common/autotest_common.sh@10 -- # set +x 00:17:26.998 05:36:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:26.998 05:36:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:26.998 05:36:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.998 05:36:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.257 05:36:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.257 05:36:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.257 05:36:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:27.516 [2024-10-07 05:36:31.417243] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.775 05:36:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.034 [2024-10-07 05:36:31.900687] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.034 [2024-10-07 05:36:31.900722] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.034 [2024-10-07 05:36:31.900801] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.034 [2024-10-07 05:36:31.969327] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.034 [2024-10-07 05:36:31.969361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:28.034 05:36:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.034 05:36:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.034 05:36:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.034 05:36:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.293 05:36:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:28.293 05:36:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:28.293 05:36:32 -- bdev/bdev_raid.sh@287 -- # killprocess 147224 00:17:28.293 05:36:32 -- common/autotest_common.sh@926 -- # '[' -z 147224 ']' 00:17:28.293 05:36:32 -- common/autotest_common.sh@930 -- # kill -0 147224 00:17:28.293 05:36:32 -- common/autotest_common.sh@931 -- # uname 00:17:28.293 05:36:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.293 05:36:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147224 00:17:28.293 05:36:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.293 05:36:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.552 05:36:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147224' 00:17:28.552 killing process with pid 147224 00:17:28.552 05:36:32 -- common/autotest_common.sh@945 -- # kill 147224 00:17:28.552 [2024-10-07 05:36:32.272486] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.552 [2024-10-07 05:36:32.272622] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.552 05:36:32 -- common/autotest_common.sh@950 -- # wait 147224 00:17:29.489 ************************************ 00:17:29.489 END TEST raid_state_function_test_sb 00:17:29.489 ************************************ 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:29.489 00:17:29.489 real 0m13.290s 00:17:29.489 user 0m23.244s 00:17:29.489 sys 0m1.642s 00:17:29.489 05:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.489 05:36:33 -- common/autotest_common.sh@10 -- # set +x 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:29.489 05:36:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:29.489 05:36:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:29.489 05:36:33 -- common/autotest_common.sh@10 -- # set +x 00:17:29.489 ************************************ 00:17:29.489 START TEST raid_superblock_test 00:17:29.489 ************************************ 00:17:29.489 05:36:33 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=148068 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:29.489 05:36:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 148068 /var/tmp/spdk-raid.sock 00:17:29.489 05:36:33 -- common/autotest_common.sh@819 -- # '[' -z 148068 ']' 00:17:29.489 05:36:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:29.489 05:36:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:29.489 05:36:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:29.489 05:36:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.489 05:36:33 -- common/autotest_common.sh@10 -- # set +x 00:17:29.489 [2024-10-07 05:36:33.443610] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:29.489 [2024-10-07 05:36:33.443852] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148068 ] 00:17:29.747 [2024-10-07 05:36:33.602700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.007 [2024-10-07 05:36:33.857013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.266 [2024-10-07 05:36:34.044836] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.525 05:36:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.525 05:36:34 -- common/autotest_common.sh@852 -- # return 0 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.525 05:36:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:30.783 malloc1 00:17:30.783 05:36:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.041 [2024-10-07 05:36:34.891356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.041 [2024-10-07 05:36:34.891645] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.041 [2024-10-07 05:36:34.891835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:31.041 [2024-10-07 05:36:34.892009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.042 [2024-10-07 05:36:34.894362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.042 [2024-10-07 05:36:34.894588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.042 pt1 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.042 05:36:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:31.300 malloc2 00:17:31.300 05:36:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.558 [2024-10-07 05:36:35.328423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.558 [2024-10-07 05:36:35.328661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.558 [2024-10-07 05:36:35.328829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:31.558 [2024-10-07 05:36:35.328989] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.558 [2024-10-07 05:36:35.331579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.558 [2024-10-07 05:36:35.331777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.558 pt2 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.558 05:36:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:31.817 malloc3 00:17:31.817 05:36:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.817 [2024-10-07 05:36:35.746117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.817 [2024-10-07 05:36:35.746335] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.817 [2024-10-07 05:36:35.746426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:31.817 [2024-10-07 05:36:35.746689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.817 [2024-10-07 05:36:35.749092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.817 [2024-10-07 05:36:35.749283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.817 pt3 00:17:31.817 05:36:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.817 05:36:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.817 05:36:35 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:32.076 [2024-10-07 05:36:36.018205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.076 [2024-10-07 05:36:36.020335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.076 [2024-10-07 05:36:36.020530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:32.076 [2024-10-07 05:36:36.020776] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:32.076 [2024-10-07 05:36:36.020891] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:32.076 [2024-10-07 05:36:36.021048] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:32.076 [2024-10-07 05:36:36.021593] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:32.076 [2024-10-07 05:36:36.021758] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:32.076 [2024-10-07 05:36:36.022005] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.076 05:36:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.643 05:36:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.643 "name": "raid_bdev1", 00:17:32.643 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:32.643 "strip_size_kb": 0, 00:17:32.643 "state": "online", 00:17:32.643 "raid_level": "raid1", 00:17:32.643 "superblock": true, 00:17:32.643 "num_base_bdevs": 3, 00:17:32.643 "num_base_bdevs_discovered": 3, 00:17:32.643 "num_base_bdevs_operational": 3, 00:17:32.643 "base_bdevs_list": [ 00:17:32.643 { 00:17:32.643 "name": 
"pt1", 00:17:32.643 "uuid": "620bacaa-8ba6-5a24-af71-9c8567b4a004", 00:17:32.643 "is_configured": true, 00:17:32.643 "data_offset": 2048, 00:17:32.643 "data_size": 63488 00:17:32.643 }, 00:17:32.643 { 00:17:32.643 "name": "pt2", 00:17:32.643 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:32.643 "is_configured": true, 00:17:32.643 "data_offset": 2048, 00:17:32.643 "data_size": 63488 00:17:32.643 }, 00:17:32.643 { 00:17:32.643 "name": "pt3", 00:17:32.643 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:32.643 "is_configured": true, 00:17:32.643 "data_offset": 2048, 00:17:32.643 "data_size": 63488 00:17:32.643 } 00:17:32.643 ] 00:17:32.643 }' 00:17:32.643 05:36:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.643 05:36:36 -- common/autotest_common.sh@10 -- # set +x 00:17:33.209 05:36:36 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.209 05:36:36 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:33.209 [2024-10-07 05:36:37.162872] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.209 05:36:37 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=810f4fb5-0a2e-4b03-9621-0f6ecb14873b 00:17:33.209 05:36:37 -- bdev/bdev_raid.sh@380 -- # '[' -z 810f4fb5-0a2e-4b03-9621-0f6ecb14873b ']' 00:17:33.209 05:36:37 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.467 [2024-10-07 05:36:37.434705] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.467 [2024-10-07 05:36:37.435064] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.467 [2024-10-07 05:36:37.435284] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.467 [2024-10-07 05:36:37.435539] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.467 [2024-10-07 05:36:37.435668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:33.724 05:36:37 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.725 05:36:37 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.983 05:36:37 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.260 05:36:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.260 05:36:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.529 05:36:38 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:34.529 05:36:38 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.787 05:36:38 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:34.787 05:36:38 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.787 05:36:38 -- common/autotest_common.sh@640 -- # local es=0 00:17:34.787 05:36:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.787 05:36:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.787 05:36:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.787 05:36:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.787 05:36:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.787 05:36:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.787 05:36:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.787 05:36:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.787 05:36:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.787 05:36:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:35.045 [2024-10-07 05:36:38.998955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:35.045 [2024-10-07 05:36:39.000914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:35.045 [2024-10-07 05:36:39.001132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:35.045 [2024-10-07 05:36:39.001312] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:35.045 [2024-10-07 05:36:39.001543] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:35.045 [2024-10-07 05:36:39.001700] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:35.045 [2024-10-07 05:36:39.001854] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.045 [2024-10-07 05:36:39.001955] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:35.045 request: 00:17:35.045 { 00:17:35.045 "name": "raid_bdev1", 00:17:35.045 "raid_level": "raid1", 00:17:35.045 "base_bdevs": [ 00:17:35.045 "malloc1", 00:17:35.045 "malloc2", 00:17:35.045 "malloc3" 00:17:35.045 ], 00:17:35.045 "superblock": false, 00:17:35.045 "method": "bdev_raid_create", 00:17:35.045 "req_id": 1 00:17:35.045 } 00:17:35.045 Got JSON-RPC error response 00:17:35.045 response: 00:17:35.045 { 00:17:35.045 "code": -17, 00:17:35.045 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:35.045 } 00:17:35.045 05:36:39 -- common/autotest_common.sh@643 -- # es=1 00:17:35.045 05:36:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:35.045 05:36:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:35.045 05:36:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:35.045 05:36:39 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:35.045 05:36:39 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:35.303 05:36:39 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:35.303 05:36:39 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:35.303 05:36:39 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.561 [2024-10-07 05:36:39.379109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.561 [2024-10-07 05:36:39.379328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.561 [2024-10-07 05:36:39.379412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:35.561 [2024-10-07 05:36:39.379558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.561 [2024-10-07 05:36:39.381739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.561 [2024-10-07 05:36:39.381899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.561 [2024-10-07 05:36:39.382147] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:35.561 [2024-10-07 05:36:39.382316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.561 pt1 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.561 05:36:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.820 05:36:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.820 "name": "raid_bdev1", 00:17:35.820 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:35.820 "strip_size_kb": 0, 00:17:35.820 "state": "configuring", 00:17:35.820 "raid_level": "raid1", 00:17:35.820 "superblock": true, 00:17:35.820 "num_base_bdevs": 3, 00:17:35.820 "num_base_bdevs_discovered": 1, 00:17:35.820 "num_base_bdevs_operational": 3, 00:17:35.820 "base_bdevs_list": [ 00:17:35.820 { 00:17:35.820 "name": "pt1", 00:17:35.820 "uuid": "620bacaa-8ba6-5a24-af71-9c8567b4a004", 00:17:35.820 "is_configured": true, 00:17:35.820 "data_offset": 2048, 00:17:35.820 "data_size": 63488 00:17:35.820 }, 00:17:35.820 { 00:17:35.820 "name": null, 00:17:35.820 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:35.820 "is_configured": false, 00:17:35.820 "data_offset": 2048, 00:17:35.820 "data_size": 63488 00:17:35.820 }, 00:17:35.820 { 00:17:35.820 "name": null, 00:17:35.820 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:35.820 "is_configured": false, 00:17:35.820 "data_offset": 2048, 00:17:35.820 
"data_size": 63488 00:17:35.820 } 00:17:35.820 ] 00:17:35.820 }' 00:17:35.820 05:36:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.820 05:36:39 -- common/autotest_common.sh@10 -- # set +x 00:17:36.386 05:36:40 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:36.386 05:36:40 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.645 pt2 00:17:36.645 [2024-10-07 05:36:40.463305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.645 [2024-10-07 05:36:40.463384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.645 [2024-10-07 05:36:40.463431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:36.645 [2024-10-07 05:36:40.463453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.645 [2024-10-07 05:36:40.463972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.645 [2024-10-07 05:36:40.464003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.645 [2024-10-07 05:36:40.464126] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.645 [2024-10-07 05:36:40.464167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.645 05:36:40 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.904 [2024-10-07 05:36:40.663416] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.904 "name": "raid_bdev1", 00:17:36.904 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:36.904 "strip_size_kb": 0, 00:17:36.904 "state": "configuring", 00:17:36.904 "raid_level": "raid1", 00:17:36.904 "superblock": true, 00:17:36.904 "num_base_bdevs": 3, 00:17:36.904 "num_base_bdevs_discovered": 1, 00:17:36.904 "num_base_bdevs_operational": 3, 00:17:36.904 "base_bdevs_list": [ 00:17:36.904 { 00:17:36.904 "name": "pt1", 00:17:36.904 "uuid": "620bacaa-8ba6-5a24-af71-9c8567b4a004", 00:17:36.904 "is_configured": true, 00:17:36.904 "data_offset": 2048, 00:17:36.904 "data_size": 63488 00:17:36.904 }, 00:17:36.904 { 00:17:36.904 "name": null, 00:17:36.904 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 
00:17:36.904 "is_configured": false, 00:17:36.904 "data_offset": 2048, 00:17:36.904 "data_size": 63488 00:17:36.904 }, 00:17:36.904 { 00:17:36.904 "name": null, 00:17:36.904 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:36.904 "is_configured": false, 00:17:36.904 "data_offset": 2048, 00:17:36.904 "data_size": 63488 00:17:36.904 } 00:17:36.904 ] 00:17:36.904 }' 00:17:36.904 05:36:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.904 05:36:40 -- common/autotest_common.sh@10 -- # set +x 00:17:37.471 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:37.471 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.471 05:36:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.729 [2024-10-07 05:36:41.683619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.729 [2024-10-07 05:36:41.683728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.729 [2024-10-07 05:36:41.683778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:37.729 [2024-10-07 05:36:41.683810] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.729 [2024-10-07 05:36:41.684396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.729 [2024-10-07 05:36:41.684445] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.729 [2024-10-07 05:36:41.684572] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:37.729 [2024-10-07 05:36:41.684600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.729 pt2 00:17:37.729 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.729 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.729 05:36:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.987 [2024-10-07 05:36:41.947648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.987 [2024-10-07 05:36:41.947739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.988 [2024-10-07 05:36:41.947782] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:37.988 [2024-10-07 05:36:41.947813] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.988 [2024-10-07 05:36:41.948316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.988 [2024-10-07 05:36:41.948365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.988 [2024-10-07 05:36:41.948495] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:37.988 [2024-10-07 05:36:41.948524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.988 [2024-10-07 05:36:41.948670] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:37.988 [2024-10-07 05:36:41.948696] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:37.988 [2024-10-07 05:36:41.948791] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:37.988 
[2024-10-07 05:36:41.949124] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:37.988 [2024-10-07 05:36:41.949145] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:37.988 [2024-10-07 05:36:41.949277] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.988 pt3 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.246 05:36:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.246 05:36:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.246 "name": "raid_bdev1", 00:17:38.246 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:38.246 "strip_size_kb": 0, 00:17:38.246 "state": "online", 00:17:38.246 "raid_level": "raid1", 00:17:38.246 "superblock": true, 00:17:38.246 "num_base_bdevs": 3, 00:17:38.246 "num_base_bdevs_discovered": 3, 00:17:38.246 "num_base_bdevs_operational": 3, 00:17:38.246 "base_bdevs_list": [ 00:17:38.246 { 00:17:38.246 "name": "pt1", 00:17:38.246 "uuid": "620bacaa-8ba6-5a24-af71-9c8567b4a004", 00:17:38.246 "is_configured": true, 00:17:38.246 "data_offset": 2048, 00:17:38.246 "data_size": 63488 00:17:38.246 }, 00:17:38.246 { 00:17:38.246 "name": "pt2", 00:17:38.246 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:38.246 "is_configured": true, 00:17:38.246 "data_offset": 2048, 00:17:38.246 "data_size": 63488 00:17:38.246 }, 00:17:38.246 { 00:17:38.246 "name": "pt3", 00:17:38.246 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:38.246 "is_configured": true, 00:17:38.246 "data_offset": 2048, 00:17:38.246 "data_size": 63488 00:17:38.246 } 00:17:38.246 ] 00:17:38.246 }' 00:17:38.246 05:36:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.246 05:36:42 -- common/autotest_common.sh@10 -- # set +x 00:17:38.815 05:36:42 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:38.815 05:36:42 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.073 [2024-10-07 05:36:42.964133] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.073 05:36:42 -- bdev/bdev_raid.sh@430 -- # '[' 810f4fb5-0a2e-4b03-9621-0f6ecb14873b '!=' 810f4fb5-0a2e-4b03-9621-0f6ecb14873b ']' 00:17:39.073 05:36:42 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:39.073 05:36:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:39.073 05:36:42 -- bdev/bdev_raid.sh@196 -- # return 0 
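The trace lines that follow exercise the single-failure path of the raid1 superblock test: one base passthru bdev is deleted and the raid bdev is expected to stay online with two of its three bases. Condensed into the RPC calls already visible in this trace, the check looks roughly like the sketch below. This is an illustrative reconstruction, not part of the captured output; the rpc_py variable and the check_raid_state helper are names chosen here for convenience only.

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Illustrative helper (not from the log): report "<state> <discovered>/<operational>"
# for the named raid bdev, using the same bdev_raid_get_bdevs + jq pattern as the test.
check_raid_state() {
    $rpc_py bdev_raid_get_bdevs all |
        jq -r --arg name "$1" \
            '.[] | select(.name == $name) | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
}

check_raid_state raid_bdev1        # before removal: online 3/3
$rpc_py bdev_passthru_delete pt1   # drop one base bdev of the raid1 set
check_raid_state raid_bdev1        # raid1 is redundant, so still online, now 2/2

Because has_redundancy returned 0 for raid1 just above, the expected state after removing pt1 is online rather than offline, which is what the verify_raid_bdev_state raid_bdev1 online raid1 0 2 call below asserts.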
00:17:39.073 05:36:42 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:39.332 [2024-10-07 05:36:43.231908] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.332 05:36:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.589 05:36:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.589 "name": "raid_bdev1", 00:17:39.589 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:39.589 "strip_size_kb": 0, 00:17:39.589 "state": "online", 00:17:39.589 "raid_level": "raid1", 00:17:39.589 "superblock": true, 00:17:39.589 "num_base_bdevs": 3, 00:17:39.589 "num_base_bdevs_discovered": 2, 00:17:39.589 "num_base_bdevs_operational": 2, 00:17:39.589 "base_bdevs_list": [ 00:17:39.589 { 00:17:39.589 "name": null, 00:17:39.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.589 "is_configured": false, 00:17:39.589 "data_offset": 2048, 00:17:39.589 "data_size": 63488 00:17:39.589 }, 00:17:39.589 { 00:17:39.589 "name": "pt2", 00:17:39.589 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:39.589 "is_configured": true, 00:17:39.590 "data_offset": 2048, 00:17:39.590 "data_size": 63488 00:17:39.590 }, 00:17:39.590 { 00:17:39.590 "name": "pt3", 00:17:39.590 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:39.590 "is_configured": true, 00:17:39.590 "data_offset": 2048, 00:17:39.590 "data_size": 63488 00:17:39.590 } 00:17:39.590 ] 00:17:39.590 }' 00:17:39.590 05:36:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.590 05:36:43 -- common/autotest_common.sh@10 -- # set +x 00:17:40.156 05:36:44 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:40.415 [2024-10-07 05:36:44.356145] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.415 [2024-10-07 05:36:44.356176] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.415 [2024-10-07 05:36:44.356247] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.415 [2024-10-07 05:36:44.356324] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.415 [2024-10-07 05:36:44.356337] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:40.415 05:36:44 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:40.415 05:36:44 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:40.674 05:36:44 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:40.674 05:36:44 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:40.674 05:36:44 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:40.674 05:36:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:40.674 05:36:44 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:41.268 05:36:44 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:41.268 05:36:44 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:41.268 05:36:44 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:41.268 05:36:45 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:41.268 05:36:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:41.268 05:36:45 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:41.268 05:36:45 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:41.268 05:36:45 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.525 [2024-10-07 05:36:45.438893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.525 [2024-10-07 05:36:45.438997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.525 [2024-10-07 05:36:45.439038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:41.525 [2024-10-07 05:36:45.439065] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.525 [2024-10-07 05:36:45.441606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.525 [2024-10-07 05:36:45.441656] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.525 [2024-10-07 05:36:45.441783] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:41.525 [2024-10-07 05:36:45.441834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.525 pt2 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.525 05:36:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.783 05:36:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.783 "name": "raid_bdev1", 00:17:41.783 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:41.783 "strip_size_kb": 0, 00:17:41.783 "state": "configuring", 00:17:41.783 "raid_level": 
"raid1", 00:17:41.783 "superblock": true, 00:17:41.783 "num_base_bdevs": 3, 00:17:41.783 "num_base_bdevs_discovered": 1, 00:17:41.783 "num_base_bdevs_operational": 2, 00:17:41.783 "base_bdevs_list": [ 00:17:41.783 { 00:17:41.783 "name": null, 00:17:41.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.783 "is_configured": false, 00:17:41.783 "data_offset": 2048, 00:17:41.783 "data_size": 63488 00:17:41.783 }, 00:17:41.783 { 00:17:41.783 "name": "pt2", 00:17:41.783 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:41.783 "is_configured": true, 00:17:41.783 "data_offset": 2048, 00:17:41.783 "data_size": 63488 00:17:41.783 }, 00:17:41.783 { 00:17:41.783 "name": null, 00:17:41.783 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:41.783 "is_configured": false, 00:17:41.783 "data_offset": 2048, 00:17:41.783 "data_size": 63488 00:17:41.783 } 00:17:41.783 ] 00:17:41.783 }' 00:17:41.783 05:36:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.783 05:36:45 -- common/autotest_common.sh@10 -- # set +x 00:17:42.347 05:36:46 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:42.348 05:36:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:42.348 05:36:46 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:42.348 05:36:46 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:42.605 [2024-10-07 05:36:46.489824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:42.605 [2024-10-07 05:36:46.489940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.605 [2024-10-07 05:36:46.489989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:42.605 [2024-10-07 05:36:46.490016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.605 [2024-10-07 05:36:46.490867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.605 [2024-10-07 05:36:46.490928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:42.605 [2024-10-07 05:36:46.491049] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:42.605 [2024-10-07 05:36:46.491077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:42.605 [2024-10-07 05:36:46.491499] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:17:42.605 [2024-10-07 05:36:46.491521] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:42.605 [2024-10-07 05:36:46.491650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.605 [2024-10-07 05:36:46.492297] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:17:42.605 [2024-10-07 05:36:46.492321] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:17:42.605 [2024-10-07 05:36:46.492478] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.605 pt3 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.605 
05:36:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.605 05:36:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.864 05:36:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.864 "name": "raid_bdev1", 00:17:42.864 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:42.864 "strip_size_kb": 0, 00:17:42.864 "state": "online", 00:17:42.864 "raid_level": "raid1", 00:17:42.864 "superblock": true, 00:17:42.864 "num_base_bdevs": 3, 00:17:42.864 "num_base_bdevs_discovered": 2, 00:17:42.864 "num_base_bdevs_operational": 2, 00:17:42.864 "base_bdevs_list": [ 00:17:42.864 { 00:17:42.864 "name": null, 00:17:42.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.864 "is_configured": false, 00:17:42.864 "data_offset": 2048, 00:17:42.864 "data_size": 63488 00:17:42.864 }, 00:17:42.864 { 00:17:42.864 "name": "pt2", 00:17:42.864 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:42.864 "is_configured": true, 00:17:42.864 "data_offset": 2048, 00:17:42.864 "data_size": 63488 00:17:42.864 }, 00:17:42.864 { 00:17:42.864 "name": "pt3", 00:17:42.864 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:42.864 "is_configured": true, 00:17:42.864 "data_offset": 2048, 00:17:42.864 "data_size": 63488 00:17:42.864 } 00:17:42.864 ] 00:17:42.864 }' 00:17:42.864 05:36:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.864 05:36:46 -- common/autotest_common.sh@10 -- # set +x 00:17:43.432 05:36:47 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:43.432 05:36:47 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:43.690 [2024-10-07 05:36:47.488636] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.690 [2024-10-07 05:36:47.488669] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.690 [2024-10-07 05:36:47.488741] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.690 [2024-10-07 05:36:47.488808] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.690 [2024-10-07 05:36:47.488820] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:17:43.690 05:36:47 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.690 05:36:47 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:43.948 05:36:47 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:43.948 05:36:47 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:43.948 05:36:47 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:44.206 [2024-10-07 05:36:47.964707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:44.206 [2024-10-07 
05:36:47.964774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.206 [2024-10-07 05:36:47.964814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:44.206 [2024-10-07 05:36:47.964842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.206 [2024-10-07 05:36:47.967175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.206 [2024-10-07 05:36:47.967220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:44.206 [2024-10-07 05:36:47.967328] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:44.206 [2024-10-07 05:36:47.967374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:44.206 pt1 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.206 05:36:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.469 05:36:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.469 "name": "raid_bdev1", 00:17:44.469 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:44.469 "strip_size_kb": 0, 00:17:44.469 "state": "configuring", 00:17:44.469 "raid_level": "raid1", 00:17:44.469 "superblock": true, 00:17:44.469 "num_base_bdevs": 3, 00:17:44.469 "num_base_bdevs_discovered": 1, 00:17:44.469 "num_base_bdevs_operational": 3, 00:17:44.469 "base_bdevs_list": [ 00:17:44.469 { 00:17:44.469 "name": "pt1", 00:17:44.469 "uuid": "620bacaa-8ba6-5a24-af71-9c8567b4a004", 00:17:44.469 "is_configured": true, 00:17:44.469 "data_offset": 2048, 00:17:44.469 "data_size": 63488 00:17:44.469 }, 00:17:44.469 { 00:17:44.470 "name": null, 00:17:44.470 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:44.470 "is_configured": false, 00:17:44.470 "data_offset": 2048, 00:17:44.470 "data_size": 63488 00:17:44.470 }, 00:17:44.470 { 00:17:44.470 "name": null, 00:17:44.470 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:44.470 "is_configured": false, 00:17:44.470 "data_offset": 2048, 00:17:44.470 "data_size": 63488 00:17:44.470 } 00:17:44.470 ] 00:17:44.470 }' 00:17:44.470 05:36:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.470 05:36:48 -- common/autotest_common.sh@10 -- # set +x 00:17:45.037 05:36:48 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:45.037 05:36:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:45.037 05:36:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:45.037 05:36:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:45.037 
05:36:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:45.037 05:36:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:45.296 05:36:49 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:45.296 05:36:49 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:45.296 05:36:49 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:45.296 05:36:49 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:45.554 [2024-10-07 05:36:49.404948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:45.554 [2024-10-07 05:36:49.405052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.554 [2024-10-07 05:36:49.405084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:45.554 [2024-10-07 05:36:49.405111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.554 [2024-10-07 05:36:49.405609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.554 [2024-10-07 05:36:49.405646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:45.554 [2024-10-07 05:36:49.405750] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:45.554 [2024-10-07 05:36:49.405765] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:45.554 [2024-10-07 05:36:49.405772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.554 [2024-10-07 05:36:49.405801] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:17:45.554 [2024-10-07 05:36:49.405864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:45.554 pt3 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.554 05:36:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.814 05:36:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.814 "name": "raid_bdev1", 00:17:45.814 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:45.814 "strip_size_kb": 0, 00:17:45.814 "state": "configuring", 00:17:45.814 "raid_level": "raid1", 00:17:45.814 "superblock": true, 00:17:45.814 "num_base_bdevs": 3, 00:17:45.814 "num_base_bdevs_discovered": 1, 00:17:45.814 "num_base_bdevs_operational": 2, 00:17:45.814 
"base_bdevs_list": [ 00:17:45.814 { 00:17:45.814 "name": null, 00:17:45.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.814 "is_configured": false, 00:17:45.814 "data_offset": 2048, 00:17:45.814 "data_size": 63488 00:17:45.814 }, 00:17:45.814 { 00:17:45.814 "name": null, 00:17:45.814 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:45.814 "is_configured": false, 00:17:45.814 "data_offset": 2048, 00:17:45.814 "data_size": 63488 00:17:45.814 }, 00:17:45.814 { 00:17:45.814 "name": "pt3", 00:17:45.814 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:45.814 "is_configured": true, 00:17:45.814 "data_offset": 2048, 00:17:45.814 "data_size": 63488 00:17:45.814 } 00:17:45.814 ] 00:17:45.814 }' 00:17:45.814 05:36:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.814 05:36:49 -- common/autotest_common.sh@10 -- # set +x 00:17:46.382 05:36:50 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:46.382 05:36:50 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:46.382 05:36:50 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.642 [2024-10-07 05:36:50.513157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.642 [2024-10-07 05:36:50.513252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.642 [2024-10-07 05:36:50.513287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:46.642 [2024-10-07 05:36:50.513314] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.642 [2024-10-07 05:36:50.513829] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.642 [2024-10-07 05:36:50.513877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.642 [2024-10-07 05:36:50.513999] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:46.642 [2024-10-07 05:36:50.514053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.642 [2024-10-07 05:36:50.514182] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:17:46.642 [2024-10-07 05:36:50.514209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:46.642 [2024-10-07 05:36:50.514321] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:46.642 [2024-10-07 05:36:50.514679] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:17:46.642 [2024-10-07 05:36:50.514702] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:17:46.642 [2024-10-07 05:36:50.514855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.642 pt2 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.642 05:36:50 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.642 05:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.901 05:36:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.901 "name": "raid_bdev1", 00:17:46.901 "uuid": "810f4fb5-0a2e-4b03-9621-0f6ecb14873b", 00:17:46.901 "strip_size_kb": 0, 00:17:46.901 "state": "online", 00:17:46.901 "raid_level": "raid1", 00:17:46.901 "superblock": true, 00:17:46.901 "num_base_bdevs": 3, 00:17:46.901 "num_base_bdevs_discovered": 2, 00:17:46.901 "num_base_bdevs_operational": 2, 00:17:46.901 "base_bdevs_list": [ 00:17:46.901 { 00:17:46.901 "name": null, 00:17:46.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.901 "is_configured": false, 00:17:46.901 "data_offset": 2048, 00:17:46.901 "data_size": 63488 00:17:46.901 }, 00:17:46.901 { 00:17:46.901 "name": "pt2", 00:17:46.901 "uuid": "c7e90ece-4425-5bb3-9cb1-e3a0a352e959", 00:17:46.901 "is_configured": true, 00:17:46.901 "data_offset": 2048, 00:17:46.901 "data_size": 63488 00:17:46.901 }, 00:17:46.901 { 00:17:46.901 "name": "pt3", 00:17:46.901 "uuid": "7bdaf9f2-1329-5c64-893c-0c773c9b8b37", 00:17:46.901 "is_configured": true, 00:17:46.901 "data_offset": 2048, 00:17:46.901 "data_size": 63488 00:17:46.901 } 00:17:46.901 ] 00:17:46.901 }' 00:17:46.901 05:36:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.901 05:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:47.468 05:36:51 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.468 05:36:51 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:47.726 [2024-10-07 05:36:51.531339] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.726 05:36:51 -- bdev/bdev_raid.sh@506 -- # '[' 810f4fb5-0a2e-4b03-9621-0f6ecb14873b '!=' 810f4fb5-0a2e-4b03-9621-0f6ecb14873b ']' 00:17:47.726 05:36:51 -- bdev/bdev_raid.sh@511 -- # killprocess 148068 00:17:47.726 05:36:51 -- common/autotest_common.sh@926 -- # '[' -z 148068 ']' 00:17:47.726 05:36:51 -- common/autotest_common.sh@930 -- # kill -0 148068 00:17:47.726 05:36:51 -- common/autotest_common.sh@931 -- # uname 00:17:47.726 05:36:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.726 05:36:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148068 00:17:47.726 killing process with pid 148068 00:17:47.726 05:36:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:47.726 05:36:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.726 05:36:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148068' 00:17:47.726 05:36:51 -- common/autotest_common.sh@945 -- # kill 148068 00:17:47.726 [2024-10-07 05:36:51.568580] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.726 05:36:51 -- common/autotest_common.sh@950 -- # wait 148068 00:17:47.726 [2024-10-07 05:36:51.568676] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.726 [2024-10-07 05:36:51.568743] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.726 [2024-10-07 05:36:51.568770] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:17:47.984 [2024-10-07 05:36:51.778807] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.920 ************************************ 00:17:48.920 END TEST raid_superblock_test 00:17:48.920 ************************************ 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:48.920 00:17:48.920 real 0m19.353s 00:17:48.920 user 0m35.335s 00:17:48.920 sys 0m2.268s 00:17:48.920 05:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.920 05:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:48.920 05:36:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:48.920 05:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:48.920 05:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:48.920 ************************************ 00:17:48.920 START TEST raid_state_function_test 00:17:48.920 ************************************ 00:17:48.920 05:36:52 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:48.920 
05:36:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=149371 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 149371' 00:17:48.920 Process raid pid: 149371 00:17:48.920 05:36:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 149371 /var/tmp/spdk-raid.sock 00:17:48.920 05:36:52 -- common/autotest_common.sh@819 -- # '[' -z 149371 ']' 00:17:48.920 05:36:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:48.920 05:36:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:48.920 05:36:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:48.920 05:36:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.920 05:36:52 -- common/autotest_common.sh@10 -- # set +x 00:17:48.920 [2024-10-07 05:36:52.849798] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:48.920 [2024-10-07 05:36:52.849982] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.180 [2024-10-07 05:36:53.002792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.440 [2024-10-07 05:36:53.271150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.700 [2024-10-07 05:36:53.473086] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.958 05:36:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.958 05:36:53 -- common/autotest_common.sh@852 -- # return 0 00:17:49.958 05:36:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:50.216 [2024-10-07 05:36:54.068817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.216 [2024-10-07 05:36:54.068902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.216 [2024-10-07 05:36:54.068915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.216 [2024-10-07 05:36:54.068938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.216 [2024-10-07 05:36:54.068946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:50.216 [2024-10-07 05:36:54.068984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:50.216 [2024-10-07 05:36:54.068993] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:50.216 [2024-10-07 05:36:54.069017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.216 05:36:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.474 05:36:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.474 "name": "Existed_Raid", 00:17:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.474 "strip_size_kb": 64, 00:17:50.474 "state": "configuring", 00:17:50.474 "raid_level": "raid0", 00:17:50.474 "superblock": false, 00:17:50.474 "num_base_bdevs": 4, 00:17:50.474 "num_base_bdevs_discovered": 0, 00:17:50.474 "num_base_bdevs_operational": 4, 00:17:50.474 "base_bdevs_list": [ 00:17:50.474 { 00:17:50.474 "name": "BaseBdev1", 00:17:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.474 "is_configured": false, 00:17:50.474 "data_offset": 0, 00:17:50.474 "data_size": 0 00:17:50.474 }, 00:17:50.474 { 00:17:50.474 "name": "BaseBdev2", 00:17:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.474 "is_configured": false, 00:17:50.474 "data_offset": 0, 00:17:50.474 "data_size": 0 00:17:50.474 }, 00:17:50.474 { 00:17:50.474 "name": "BaseBdev3", 00:17:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.474 "is_configured": false, 00:17:50.474 "data_offset": 0, 00:17:50.474 "data_size": 0 00:17:50.474 }, 00:17:50.474 { 00:17:50.474 "name": "BaseBdev4", 00:17:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.474 "is_configured": false, 00:17:50.474 "data_offset": 0, 00:17:50.474 "data_size": 0 00:17:50.474 } 00:17:50.474 ] 00:17:50.474 }' 00:17:50.474 05:36:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.474 05:36:54 -- common/autotest_common.sh@10 -- # set +x 00:17:51.096 05:36:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.354 [2024-10-07 05:36:55.177674] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.354 [2024-10-07 05:36:55.177709] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:51.354 05:36:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.613 [2024-10-07 05:36:55.429767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.613 [2024-10-07 05:36:55.429837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.613 [2024-10-07 05:36:55.429849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.613 [2024-10-07 05:36:55.429885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:17:51.613 [2024-10-07 05:36:55.429894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.613 [2024-10-07 05:36:55.429940] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.613 [2024-10-07 05:36:55.429949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:51.613 [2024-10-07 05:36:55.429983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:51.614 05:36:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:51.873 [2024-10-07 05:36:55.708195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.873 BaseBdev1 00:17:51.873 05:36:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:51.873 05:36:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:51.873 05:36:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:51.873 05:36:55 -- common/autotest_common.sh@889 -- # local i 00:17:51.873 05:36:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:51.873 05:36:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:51.873 05:36:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.134 05:36:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.134 [ 00:17:52.134 { 00:17:52.134 "name": "BaseBdev1", 00:17:52.134 "aliases": [ 00:17:52.134 "bb343c52-735f-4355-a267-d45e68e9806f" 00:17:52.134 ], 00:17:52.134 "product_name": "Malloc disk", 00:17:52.134 "block_size": 512, 00:17:52.134 "num_blocks": 65536, 00:17:52.134 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:52.134 "assigned_rate_limits": { 00:17:52.134 "rw_ios_per_sec": 0, 00:17:52.134 "rw_mbytes_per_sec": 0, 00:17:52.134 "r_mbytes_per_sec": 0, 00:17:52.134 "w_mbytes_per_sec": 0 00:17:52.134 }, 00:17:52.134 "claimed": true, 00:17:52.134 "claim_type": "exclusive_write", 00:17:52.134 "zoned": false, 00:17:52.134 "supported_io_types": { 00:17:52.134 "read": true, 00:17:52.134 "write": true, 00:17:52.134 "unmap": true, 00:17:52.134 "write_zeroes": true, 00:17:52.134 "flush": true, 00:17:52.134 "reset": true, 00:17:52.134 "compare": false, 00:17:52.134 "compare_and_write": false, 00:17:52.134 "abort": true, 00:17:52.134 "nvme_admin": false, 00:17:52.134 "nvme_io": false 00:17:52.134 }, 00:17:52.134 "memory_domains": [ 00:17:52.134 { 00:17:52.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.134 "dma_device_type": 2 00:17:52.134 } 00:17:52.134 ], 00:17:52.134 "driver_specific": {} 00:17:52.134 } 00:17:52.134 ] 00:17:52.134 05:36:56 -- common/autotest_common.sh@895 -- # return 0 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.134 05:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.393 05:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.393 05:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.393 05:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.393 05:36:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.393 "name": "Existed_Raid", 00:17:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.393 "strip_size_kb": 64, 00:17:52.393 "state": "configuring", 00:17:52.393 "raid_level": "raid0", 00:17:52.393 "superblock": false, 00:17:52.393 "num_base_bdevs": 4, 00:17:52.393 "num_base_bdevs_discovered": 1, 00:17:52.393 "num_base_bdevs_operational": 4, 00:17:52.393 "base_bdevs_list": [ 00:17:52.393 { 00:17:52.393 "name": "BaseBdev1", 00:17:52.393 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:52.393 "is_configured": true, 00:17:52.393 "data_offset": 0, 00:17:52.393 "data_size": 65536 00:17:52.393 }, 00:17:52.393 { 00:17:52.393 "name": "BaseBdev2", 00:17:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.393 "is_configured": false, 00:17:52.393 "data_offset": 0, 00:17:52.393 "data_size": 0 00:17:52.393 }, 00:17:52.393 { 00:17:52.393 "name": "BaseBdev3", 00:17:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.393 "is_configured": false, 00:17:52.393 "data_offset": 0, 00:17:52.393 "data_size": 0 00:17:52.393 }, 00:17:52.393 { 00:17:52.393 "name": "BaseBdev4", 00:17:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.393 "is_configured": false, 00:17:52.393 "data_offset": 0, 00:17:52.393 "data_size": 0 00:17:52.393 } 00:17:52.393 ] 00:17:52.393 }' 00:17:52.393 05:36:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.393 05:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:52.960 05:36:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.219 [2024-10-07 05:36:57.172587] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.219 [2024-10-07 05:36:57.172672] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:53.219 05:36:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:53.219 05:36:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.787 [2024-10-07 05:36:57.464809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.787 [2024-10-07 05:36:57.467508] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.787 [2024-10-07 05:36:57.467609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.787 [2024-10-07 05:36:57.467623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.787 [2024-10-07 05:36:57.467651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.787 [2024-10-07 05:36:57.467661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.787 [2024-10-07 05:36:57.467680] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.787 05:36:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.787 "name": "Existed_Raid", 00:17:53.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.787 "strip_size_kb": 64, 00:17:53.787 "state": "configuring", 00:17:53.787 "raid_level": "raid0", 00:17:53.787 "superblock": false, 00:17:53.787 "num_base_bdevs": 4, 00:17:53.787 "num_base_bdevs_discovered": 1, 00:17:53.787 "num_base_bdevs_operational": 4, 00:17:53.787 "base_bdevs_list": [ 00:17:53.787 { 00:17:53.787 "name": "BaseBdev1", 00:17:53.787 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:53.787 "is_configured": true, 00:17:53.787 "data_offset": 0, 00:17:53.787 "data_size": 65536 00:17:53.787 }, 00:17:53.787 { 00:17:53.787 "name": "BaseBdev2", 00:17:53.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.787 "is_configured": false, 00:17:53.787 "data_offset": 0, 00:17:53.787 "data_size": 0 00:17:53.787 }, 00:17:53.787 { 00:17:53.787 "name": "BaseBdev3", 00:17:53.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.787 "is_configured": false, 00:17:53.787 "data_offset": 0, 00:17:53.787 "data_size": 0 00:17:53.787 }, 00:17:53.787 { 00:17:53.787 "name": "BaseBdev4", 00:17:53.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.787 "is_configured": false, 00:17:53.787 "data_offset": 0, 00:17:53.787 "data_size": 0 00:17:53.787 } 00:17:53.788 ] 00:17:53.788 }' 00:17:53.788 05:36:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.788 05:36:57 -- common/autotest_common.sh@10 -- # set +x 00:17:54.355 05:36:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.612 [2024-10-07 05:36:58.552632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.612 BaseBdev2 00:17:54.612 05:36:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:54.612 05:36:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:54.612 05:36:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:54.612 05:36:58 -- common/autotest_common.sh@889 -- # local i 00:17:54.612 05:36:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:54.612 05:36:58 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:17:54.612 05:36:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.870 05:36:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.128 [ 00:17:55.128 { 00:17:55.128 "name": "BaseBdev2", 00:17:55.128 "aliases": [ 00:17:55.128 "0db8d642-fc35-489e-a0e3-524d4f391dbe" 00:17:55.128 ], 00:17:55.128 "product_name": "Malloc disk", 00:17:55.128 "block_size": 512, 00:17:55.128 "num_blocks": 65536, 00:17:55.128 "uuid": "0db8d642-fc35-489e-a0e3-524d4f391dbe", 00:17:55.128 "assigned_rate_limits": { 00:17:55.129 "rw_ios_per_sec": 0, 00:17:55.129 "rw_mbytes_per_sec": 0, 00:17:55.129 "r_mbytes_per_sec": 0, 00:17:55.129 "w_mbytes_per_sec": 0 00:17:55.129 }, 00:17:55.129 "claimed": true, 00:17:55.129 "claim_type": "exclusive_write", 00:17:55.129 "zoned": false, 00:17:55.129 "supported_io_types": { 00:17:55.129 "read": true, 00:17:55.129 "write": true, 00:17:55.129 "unmap": true, 00:17:55.129 "write_zeroes": true, 00:17:55.129 "flush": true, 00:17:55.129 "reset": true, 00:17:55.129 "compare": false, 00:17:55.129 "compare_and_write": false, 00:17:55.129 "abort": true, 00:17:55.129 "nvme_admin": false, 00:17:55.129 "nvme_io": false 00:17:55.129 }, 00:17:55.129 "memory_domains": [ 00:17:55.129 { 00:17:55.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.129 "dma_device_type": 2 00:17:55.129 } 00:17:55.129 ], 00:17:55.129 "driver_specific": {} 00:17:55.129 } 00:17:55.129 ] 00:17:55.129 05:36:59 -- common/autotest_common.sh@895 -- # return 0 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.129 05:36:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.386 05:36:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.386 05:36:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.386 05:36:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.386 "name": "Existed_Raid", 00:17:55.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.386 "strip_size_kb": 64, 00:17:55.386 "state": "configuring", 00:17:55.386 "raid_level": "raid0", 00:17:55.386 "superblock": false, 00:17:55.386 "num_base_bdevs": 4, 00:17:55.386 "num_base_bdevs_discovered": 2, 00:17:55.386 "num_base_bdevs_operational": 4, 00:17:55.386 "base_bdevs_list": [ 00:17:55.386 { 00:17:55.386 "name": "BaseBdev1", 00:17:55.386 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:55.386 "is_configured": true, 00:17:55.386 "data_offset": 0, 00:17:55.386 "data_size": 65536 00:17:55.386 }, 
00:17:55.386 { 00:17:55.386 "name": "BaseBdev2", 00:17:55.386 "uuid": "0db8d642-fc35-489e-a0e3-524d4f391dbe", 00:17:55.386 "is_configured": true, 00:17:55.386 "data_offset": 0, 00:17:55.386 "data_size": 65536 00:17:55.386 }, 00:17:55.386 { 00:17:55.386 "name": "BaseBdev3", 00:17:55.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.386 "is_configured": false, 00:17:55.386 "data_offset": 0, 00:17:55.386 "data_size": 0 00:17:55.386 }, 00:17:55.386 { 00:17:55.386 "name": "BaseBdev4", 00:17:55.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.386 "is_configured": false, 00:17:55.386 "data_offset": 0, 00:17:55.386 "data_size": 0 00:17:55.386 } 00:17:55.386 ] 00:17:55.386 }' 00:17:55.386 05:36:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.386 05:36:59 -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 05:36:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:56.321 [2024-10-07 05:37:00.180673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.321 BaseBdev3 00:17:56.321 05:37:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:56.321 05:37:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:56.321 05:37:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:56.321 05:37:00 -- common/autotest_common.sh@889 -- # local i 00:17:56.321 05:37:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:56.321 05:37:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:56.321 05:37:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.580 05:37:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:56.837 [ 00:17:56.837 { 00:17:56.837 "name": "BaseBdev3", 00:17:56.837 "aliases": [ 00:17:56.837 "6f609147-1d0e-46fd-8170-e104c06b2fb7" 00:17:56.837 ], 00:17:56.837 "product_name": "Malloc disk", 00:17:56.837 "block_size": 512, 00:17:56.837 "num_blocks": 65536, 00:17:56.837 "uuid": "6f609147-1d0e-46fd-8170-e104c06b2fb7", 00:17:56.837 "assigned_rate_limits": { 00:17:56.837 "rw_ios_per_sec": 0, 00:17:56.837 "rw_mbytes_per_sec": 0, 00:17:56.837 "r_mbytes_per_sec": 0, 00:17:56.837 "w_mbytes_per_sec": 0 00:17:56.837 }, 00:17:56.837 "claimed": true, 00:17:56.837 "claim_type": "exclusive_write", 00:17:56.837 "zoned": false, 00:17:56.837 "supported_io_types": { 00:17:56.837 "read": true, 00:17:56.837 "write": true, 00:17:56.837 "unmap": true, 00:17:56.837 "write_zeroes": true, 00:17:56.837 "flush": true, 00:17:56.837 "reset": true, 00:17:56.837 "compare": false, 00:17:56.837 "compare_and_write": false, 00:17:56.838 "abort": true, 00:17:56.838 "nvme_admin": false, 00:17:56.838 "nvme_io": false 00:17:56.838 }, 00:17:56.838 "memory_domains": [ 00:17:56.838 { 00:17:56.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.838 "dma_device_type": 2 00:17:56.838 } 00:17:56.838 ], 00:17:56.838 "driver_specific": {} 00:17:56.838 } 00:17:56.838 ] 00:17:56.838 05:37:00 -- common/autotest_common.sh@895 -- # return 0 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.838 05:37:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.096 05:37:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.096 "name": "Existed_Raid", 00:17:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.096 "strip_size_kb": 64, 00:17:57.096 "state": "configuring", 00:17:57.096 "raid_level": "raid0", 00:17:57.096 "superblock": false, 00:17:57.096 "num_base_bdevs": 4, 00:17:57.096 "num_base_bdevs_discovered": 3, 00:17:57.096 "num_base_bdevs_operational": 4, 00:17:57.096 "base_bdevs_list": [ 00:17:57.096 { 00:17:57.096 "name": "BaseBdev1", 00:17:57.096 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:57.096 "is_configured": true, 00:17:57.096 "data_offset": 0, 00:17:57.096 "data_size": 65536 00:17:57.096 }, 00:17:57.096 { 00:17:57.096 "name": "BaseBdev2", 00:17:57.096 "uuid": "0db8d642-fc35-489e-a0e3-524d4f391dbe", 00:17:57.096 "is_configured": true, 00:17:57.096 "data_offset": 0, 00:17:57.096 "data_size": 65536 00:17:57.096 }, 00:17:57.096 { 00:17:57.096 "name": "BaseBdev3", 00:17:57.096 "uuid": "6f609147-1d0e-46fd-8170-e104c06b2fb7", 00:17:57.096 "is_configured": true, 00:17:57.096 "data_offset": 0, 00:17:57.096 "data_size": 65536 00:17:57.096 }, 00:17:57.096 { 00:17:57.096 "name": "BaseBdev4", 00:17:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.096 "is_configured": false, 00:17:57.096 "data_offset": 0, 00:17:57.096 "data_size": 0 00:17:57.096 } 00:17:57.096 ] 00:17:57.096 }' 00:17:57.096 05:37:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.096 05:37:00 -- common/autotest_common.sh@10 -- # set +x 00:17:57.665 05:37:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:57.924 [2024-10-07 05:37:01.761301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:57.924 [2024-10-07 05:37:01.761356] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:57.924 [2024-10-07 05:37:01.761366] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:57.924 [2024-10-07 05:37:01.761505] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:57.924 [2024-10-07 05:37:01.761889] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:57.924 [2024-10-07 05:37:01.761915] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:57.924 [2024-10-07 05:37:01.762196] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.924 BaseBdev4 00:17:57.924 05:37:01 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:17:57.924 05:37:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:57.924 05:37:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.924 05:37:01 -- common/autotest_common.sh@889 -- # local i 00:17:57.924 05:37:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.924 05:37:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.924 05:37:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.182 05:37:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:58.441 [ 00:17:58.441 { 00:17:58.441 "name": "BaseBdev4", 00:17:58.441 "aliases": [ 00:17:58.441 "b996ea1f-f65e-4cd5-ad8d-95ada64e1497" 00:17:58.441 ], 00:17:58.441 "product_name": "Malloc disk", 00:17:58.441 "block_size": 512, 00:17:58.441 "num_blocks": 65536, 00:17:58.441 "uuid": "b996ea1f-f65e-4cd5-ad8d-95ada64e1497", 00:17:58.441 "assigned_rate_limits": { 00:17:58.441 "rw_ios_per_sec": 0, 00:17:58.441 "rw_mbytes_per_sec": 0, 00:17:58.441 "r_mbytes_per_sec": 0, 00:17:58.441 "w_mbytes_per_sec": 0 00:17:58.441 }, 00:17:58.441 "claimed": true, 00:17:58.441 "claim_type": "exclusive_write", 00:17:58.441 "zoned": false, 00:17:58.441 "supported_io_types": { 00:17:58.441 "read": true, 00:17:58.441 "write": true, 00:17:58.441 "unmap": true, 00:17:58.441 "write_zeroes": true, 00:17:58.441 "flush": true, 00:17:58.441 "reset": true, 00:17:58.441 "compare": false, 00:17:58.441 "compare_and_write": false, 00:17:58.441 "abort": true, 00:17:58.441 "nvme_admin": false, 00:17:58.441 "nvme_io": false 00:17:58.441 }, 00:17:58.441 "memory_domains": [ 00:17:58.441 { 00:17:58.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.441 "dma_device_type": 2 00:17:58.441 } 00:17:58.441 ], 00:17:58.441 "driver_specific": {} 00:17:58.441 } 00:17:58.441 ] 00:17:58.441 05:37:02 -- common/autotest_common.sh@895 -- # return 0 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.441 05:37:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.700 05:37:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.700 "name": "Existed_Raid", 00:17:58.700 "uuid": "7c477ab9-b5af-4ec0-afc3-6b133368a47b", 00:17:58.700 "strip_size_kb": 64, 00:17:58.700 "state": "online", 00:17:58.700 "raid_level": "raid0", 00:17:58.700 "superblock": false, 00:17:58.700 
"num_base_bdevs": 4, 00:17:58.700 "num_base_bdevs_discovered": 4, 00:17:58.700 "num_base_bdevs_operational": 4, 00:17:58.700 "base_bdevs_list": [ 00:17:58.700 { 00:17:58.700 "name": "BaseBdev1", 00:17:58.700 "uuid": "bb343c52-735f-4355-a267-d45e68e9806f", 00:17:58.700 "is_configured": true, 00:17:58.700 "data_offset": 0, 00:17:58.700 "data_size": 65536 00:17:58.700 }, 00:17:58.700 { 00:17:58.700 "name": "BaseBdev2", 00:17:58.700 "uuid": "0db8d642-fc35-489e-a0e3-524d4f391dbe", 00:17:58.700 "is_configured": true, 00:17:58.700 "data_offset": 0, 00:17:58.700 "data_size": 65536 00:17:58.700 }, 00:17:58.700 { 00:17:58.700 "name": "BaseBdev3", 00:17:58.700 "uuid": "6f609147-1d0e-46fd-8170-e104c06b2fb7", 00:17:58.700 "is_configured": true, 00:17:58.700 "data_offset": 0, 00:17:58.700 "data_size": 65536 00:17:58.700 }, 00:17:58.700 { 00:17:58.700 "name": "BaseBdev4", 00:17:58.700 "uuid": "b996ea1f-f65e-4cd5-ad8d-95ada64e1497", 00:17:58.700 "is_configured": true, 00:17:58.700 "data_offset": 0, 00:17:58.700 "data_size": 65536 00:17:58.700 } 00:17:58.700 ] 00:17:58.700 }' 00:17:58.700 05:37:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.700 05:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:59.267 05:37:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:59.525 [2024-10-07 05:37:03.305687] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.525 [2024-10-07 05:37:03.305715] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.525 [2024-10-07 05:37:03.305766] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.525 05:37:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.784 05:37:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.784 "name": "Existed_Raid", 00:17:59.784 "uuid": "7c477ab9-b5af-4ec0-afc3-6b133368a47b", 00:17:59.784 "strip_size_kb": 64, 00:17:59.784 "state": "offline", 00:17:59.784 "raid_level": "raid0", 00:17:59.784 "superblock": false, 00:17:59.784 "num_base_bdevs": 4, 00:17:59.784 "num_base_bdevs_discovered": 3, 00:17:59.784 "num_base_bdevs_operational": 3, 00:17:59.784 
"base_bdevs_list": [ 00:17:59.784 { 00:17:59.784 "name": null, 00:17:59.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.784 "is_configured": false, 00:17:59.784 "data_offset": 0, 00:17:59.784 "data_size": 65536 00:17:59.784 }, 00:17:59.784 { 00:17:59.784 "name": "BaseBdev2", 00:17:59.784 "uuid": "0db8d642-fc35-489e-a0e3-524d4f391dbe", 00:17:59.784 "is_configured": true, 00:17:59.784 "data_offset": 0, 00:17:59.784 "data_size": 65536 00:17:59.784 }, 00:17:59.784 { 00:17:59.784 "name": "BaseBdev3", 00:17:59.784 "uuid": "6f609147-1d0e-46fd-8170-e104c06b2fb7", 00:17:59.784 "is_configured": true, 00:17:59.784 "data_offset": 0, 00:17:59.784 "data_size": 65536 00:17:59.784 }, 00:17:59.784 { 00:17:59.784 "name": "BaseBdev4", 00:17:59.784 "uuid": "b996ea1f-f65e-4cd5-ad8d-95ada64e1497", 00:17:59.784 "is_configured": true, 00:17:59.784 "data_offset": 0, 00:17:59.784 "data_size": 65536 00:17:59.784 } 00:17:59.784 ] 00:17:59.784 }' 00:17:59.784 05:37:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.784 05:37:03 -- common/autotest_common.sh@10 -- # set +x 00:18:00.351 05:37:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:00.351 05:37:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:00.351 05:37:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.351 05:37:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:00.609 05:37:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:00.609 05:37:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.609 05:37:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:00.867 [2024-10-07 05:37:04.770967] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.126 05:37:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:01.126 05:37:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.126 05:37:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.126 05:37:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.385 05:37:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.385 05:37:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.385 05:37:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:01.385 [2024-10-07 05:37:05.346242] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.644 05:37:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:01.644 05:37:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.644 05:37:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.644 05:37:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.902 05:37:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.902 05:37:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.902 05:37:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:02.161 [2024-10-07 05:37:05.906611] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:02.161 [2024-10-07 05:37:05.906674] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 
name Existed_Raid, state offline 00:18:02.161 05:37:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.161 05:37:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.161 05:37:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.161 05:37:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.419 05:37:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:02.419 05:37:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:02.419 05:37:06 -- bdev/bdev_raid.sh@287 -- # killprocess 149371 00:18:02.419 05:37:06 -- common/autotest_common.sh@926 -- # '[' -z 149371 ']' 00:18:02.419 05:37:06 -- common/autotest_common.sh@930 -- # kill -0 149371 00:18:02.419 05:37:06 -- common/autotest_common.sh@931 -- # uname 00:18:02.419 05:37:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.419 05:37:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149371 00:18:02.419 killing process with pid 149371 00:18:02.419 05:37:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.419 05:37:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.419 05:37:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149371' 00:18:02.419 05:37:06 -- common/autotest_common.sh@945 -- # kill 149371 00:18:02.419 05:37:06 -- common/autotest_common.sh@950 -- # wait 149371 00:18:02.419 [2024-10-07 05:37:06.207132] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.419 [2024-10-07 05:37:06.207621] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.354 ************************************ 00:18:03.354 END TEST raid_state_function_test 00:18:03.354 ************************************ 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:03.354 00:18:03.354 real 0m14.410s 00:18:03.354 user 0m25.573s 00:18:03.354 sys 0m1.773s 00:18:03.354 05:37:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.354 05:37:07 -- common/autotest_common.sh@10 -- # set +x 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:03.354 05:37:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:03.354 05:37:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.354 05:37:07 -- common/autotest_common.sh@10 -- # set +x 00:18:03.354 ************************************ 00:18:03.354 START TEST raid_state_function_test_sb 00:18:03.354 ************************************ 00:18:03.354 05:37:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=150389 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 150389' 00:18:03.354 Process raid pid: 150389 00:18:03.354 05:37:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 150389 /var/tmp/spdk-raid.sock 00:18:03.354 05:37:07 -- common/autotest_common.sh@819 -- # '[' -z 150389 ']' 00:18:03.354 05:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.354 05:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:03.354 05:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.354 05:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.354 05:37:07 -- common/autotest_common.sh@10 -- # set +x 00:18:03.613 [2024-10-07 05:37:07.342062] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
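The trace above launches a standalone bdev_svc application with its own RPC socket before any RAID is assembled; the harness creates the array first (so it sits in the "configuring" state) and then claims the four base bdevs one at a time. A condensed, one-shot sketch of the same setup, using only RPCs that appear in this log and assuming the same checkout path and socket (process lifetime handling is simplified compared to the harness):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# four 32 MiB malloc bdevs with 512-byte blocks, then a raid0 with 64 KiB strips (-z 64) and an on-disk superblock (-s)
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b $b
done
$rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

Created in this order the array goes straight to "online"; the test run below deliberately exercises the intermediate "configuring" states instead.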
00:18:03.613 [2024-10-07 05:37:07.342252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.613 [2024-10-07 05:37:07.509739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.872 [2024-10-07 05:37:07.710792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.131 [2024-10-07 05:37:07.903427] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.389 05:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.389 05:37:08 -- common/autotest_common.sh@852 -- # return 0 00:18:04.389 05:37:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:04.648 [2024-10-07 05:37:08.482596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.648 [2024-10-07 05:37:08.482702] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.648 [2024-10-07 05:37:08.482716] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.648 [2024-10-07 05:37:08.482740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.648 [2024-10-07 05:37:08.482747] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.648 [2024-10-07 05:37:08.482793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.648 [2024-10-07 05:37:08.482802] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.648 [2024-10-07 05:37:08.482824] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.648 05:37:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.907 05:37:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.907 "name": "Existed_Raid", 00:18:04.907 "uuid": "14f3a8c4-21e8-4b60-ba2e-f06e23662a13", 00:18:04.907 "strip_size_kb": 64, 00:18:04.907 "state": "configuring", 00:18:04.907 "raid_level": "raid0", 00:18:04.907 "superblock": true, 00:18:04.907 "num_base_bdevs": 4, 00:18:04.907 "num_base_bdevs_discovered": 0, 00:18:04.907 "num_base_bdevs_operational": 4, 00:18:04.907 "base_bdevs_list": [ 00:18:04.907 { 00:18:04.907 
"name": "BaseBdev1", 00:18:04.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.907 "is_configured": false, 00:18:04.907 "data_offset": 0, 00:18:04.908 "data_size": 0 00:18:04.908 }, 00:18:04.908 { 00:18:04.908 "name": "BaseBdev2", 00:18:04.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.908 "is_configured": false, 00:18:04.908 "data_offset": 0, 00:18:04.908 "data_size": 0 00:18:04.908 }, 00:18:04.908 { 00:18:04.908 "name": "BaseBdev3", 00:18:04.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.908 "is_configured": false, 00:18:04.908 "data_offset": 0, 00:18:04.908 "data_size": 0 00:18:04.908 }, 00:18:04.908 { 00:18:04.908 "name": "BaseBdev4", 00:18:04.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.908 "is_configured": false, 00:18:04.908 "data_offset": 0, 00:18:04.908 "data_size": 0 00:18:04.908 } 00:18:04.908 ] 00:18:04.908 }' 00:18:04.908 05:37:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.908 05:37:08 -- common/autotest_common.sh@10 -- # set +x 00:18:05.474 05:37:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:05.733 [2024-10-07 05:37:09.646601] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.733 [2024-10-07 05:37:09.646646] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:05.733 05:37:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.991 [2024-10-07 05:37:09.878751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.991 [2024-10-07 05:37:09.878889] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.991 [2024-10-07 05:37:09.878903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.991 [2024-10-07 05:37:09.878948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.991 [2024-10-07 05:37:09.878957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.991 [2024-10-07 05:37:09.879007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.991 [2024-10-07 05:37:09.879015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:05.991 [2024-10-07 05:37:09.879047] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:05.991 05:37:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:06.250 [2024-10-07 05:37:10.117804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.250 BaseBdev1 00:18:06.250 05:37:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:06.250 05:37:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:06.250 05:37:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.250 05:37:10 -- common/autotest_common.sh@889 -- # local i 00:18:06.250 05:37:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.250 05:37:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.250 05:37:10 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.508 05:37:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:06.767 [ 00:18:06.767 { 00:18:06.767 "name": "BaseBdev1", 00:18:06.767 "aliases": [ 00:18:06.767 "7dac09da-687f-4b09-9e22-874a1a19fc29" 00:18:06.767 ], 00:18:06.767 "product_name": "Malloc disk", 00:18:06.767 "block_size": 512, 00:18:06.767 "num_blocks": 65536, 00:18:06.767 "uuid": "7dac09da-687f-4b09-9e22-874a1a19fc29", 00:18:06.767 "assigned_rate_limits": { 00:18:06.767 "rw_ios_per_sec": 0, 00:18:06.767 "rw_mbytes_per_sec": 0, 00:18:06.767 "r_mbytes_per_sec": 0, 00:18:06.767 "w_mbytes_per_sec": 0 00:18:06.767 }, 00:18:06.767 "claimed": true, 00:18:06.767 "claim_type": "exclusive_write", 00:18:06.767 "zoned": false, 00:18:06.767 "supported_io_types": { 00:18:06.767 "read": true, 00:18:06.767 "write": true, 00:18:06.767 "unmap": true, 00:18:06.767 "write_zeroes": true, 00:18:06.767 "flush": true, 00:18:06.767 "reset": true, 00:18:06.767 "compare": false, 00:18:06.767 "compare_and_write": false, 00:18:06.767 "abort": true, 00:18:06.767 "nvme_admin": false, 00:18:06.767 "nvme_io": false 00:18:06.767 }, 00:18:06.767 "memory_domains": [ 00:18:06.767 { 00:18:06.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.767 "dma_device_type": 2 00:18:06.767 } 00:18:06.767 ], 00:18:06.767 "driver_specific": {} 00:18:06.767 } 00:18:06.767 ] 00:18:06.767 05:37:10 -- common/autotest_common.sh@895 -- # return 0 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.767 05:37:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.026 05:37:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.026 "name": "Existed_Raid", 00:18:07.026 "uuid": "8cadfef0-224e-4cb2-9a99-31d7d0f597c9", 00:18:07.026 "strip_size_kb": 64, 00:18:07.026 "state": "configuring", 00:18:07.026 "raid_level": "raid0", 00:18:07.026 "superblock": true, 00:18:07.026 "num_base_bdevs": 4, 00:18:07.026 "num_base_bdevs_discovered": 1, 00:18:07.026 "num_base_bdevs_operational": 4, 00:18:07.026 "base_bdevs_list": [ 00:18:07.026 { 00:18:07.026 "name": "BaseBdev1", 00:18:07.026 "uuid": "7dac09da-687f-4b09-9e22-874a1a19fc29", 00:18:07.026 "is_configured": true, 00:18:07.026 "data_offset": 2048, 00:18:07.026 "data_size": 63488 00:18:07.026 }, 00:18:07.026 { 00:18:07.026 "name": "BaseBdev2", 00:18:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.026 "is_configured": false, 00:18:07.026 "data_offset": 0, 00:18:07.026 "data_size": 0 00:18:07.026 }, 
00:18:07.026 { 00:18:07.026 "name": "BaseBdev3", 00:18:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.026 "is_configured": false, 00:18:07.026 "data_offset": 0, 00:18:07.026 "data_size": 0 00:18:07.026 }, 00:18:07.026 { 00:18:07.026 "name": "BaseBdev4", 00:18:07.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.026 "is_configured": false, 00:18:07.026 "data_offset": 0, 00:18:07.026 "data_size": 0 00:18:07.026 } 00:18:07.026 ] 00:18:07.026 }' 00:18:07.026 05:37:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.026 05:37:10 -- common/autotest_common.sh@10 -- # set +x 00:18:07.615 05:37:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.885 [2024-10-07 05:37:11.646111] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.885 [2024-10-07 05:37:11.646191] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:07.885 05:37:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:07.885 05:37:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:08.143 05:37:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:08.401 BaseBdev1 00:18:08.401 05:37:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:08.401 05:37:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:08.401 05:37:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:08.401 05:37:12 -- common/autotest_common.sh@889 -- # local i 00:18:08.401 05:37:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:08.401 05:37:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:08.401 05:37:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:08.659 05:37:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:08.918 [ 00:18:08.918 { 00:18:08.918 "name": "BaseBdev1", 00:18:08.918 "aliases": [ 00:18:08.918 "17363bd5-691e-4241-b2b1-d70666d4cd2b" 00:18:08.918 ], 00:18:08.918 "product_name": "Malloc disk", 00:18:08.918 "block_size": 512, 00:18:08.918 "num_blocks": 65536, 00:18:08.918 "uuid": "17363bd5-691e-4241-b2b1-d70666d4cd2b", 00:18:08.918 "assigned_rate_limits": { 00:18:08.918 "rw_ios_per_sec": 0, 00:18:08.918 "rw_mbytes_per_sec": 0, 00:18:08.918 "r_mbytes_per_sec": 0, 00:18:08.918 "w_mbytes_per_sec": 0 00:18:08.918 }, 00:18:08.918 "claimed": false, 00:18:08.918 "zoned": false, 00:18:08.918 "supported_io_types": { 00:18:08.918 "read": true, 00:18:08.918 "write": true, 00:18:08.918 "unmap": true, 00:18:08.918 "write_zeroes": true, 00:18:08.918 "flush": true, 00:18:08.918 "reset": true, 00:18:08.918 "compare": false, 00:18:08.918 "compare_and_write": false, 00:18:08.918 "abort": true, 00:18:08.918 "nvme_admin": false, 00:18:08.918 "nvme_io": false 00:18:08.918 }, 00:18:08.918 "memory_domains": [ 00:18:08.918 { 00:18:08.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.918 "dma_device_type": 2 00:18:08.918 } 00:18:08.918 ], 00:18:08.918 "driver_specific": {} 00:18:08.918 } 00:18:08.918 ] 00:18:08.918 05:37:12 -- common/autotest_common.sh@895 -- # return 0 00:18:08.918 05:37:12 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:09.177 [2024-10-07 05:37:13.067616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.177 [2024-10-07 05:37:13.069770] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.177 [2024-10-07 05:37:13.069853] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.177 [2024-10-07 05:37:13.069866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.177 [2024-10-07 05:37:13.069891] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.177 [2024-10-07 05:37:13.069899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:09.177 [2024-10-07 05:37:13.069916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.177 05:37:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.435 05:37:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.435 "name": "Existed_Raid", 00:18:09.435 "uuid": "5fa150ab-de0c-4766-b91a-0eba2e025d0c", 00:18:09.435 "strip_size_kb": 64, 00:18:09.435 "state": "configuring", 00:18:09.435 "raid_level": "raid0", 00:18:09.435 "superblock": true, 00:18:09.435 "num_base_bdevs": 4, 00:18:09.435 "num_base_bdevs_discovered": 1, 00:18:09.435 "num_base_bdevs_operational": 4, 00:18:09.435 "base_bdevs_list": [ 00:18:09.435 { 00:18:09.435 "name": "BaseBdev1", 00:18:09.435 "uuid": "17363bd5-691e-4241-b2b1-d70666d4cd2b", 00:18:09.435 "is_configured": true, 00:18:09.435 "data_offset": 2048, 00:18:09.435 "data_size": 63488 00:18:09.435 }, 00:18:09.435 { 00:18:09.435 "name": "BaseBdev2", 00:18:09.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.435 "is_configured": false, 00:18:09.435 "data_offset": 0, 00:18:09.435 "data_size": 0 00:18:09.435 }, 00:18:09.435 { 00:18:09.435 "name": "BaseBdev3", 00:18:09.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.435 "is_configured": false, 00:18:09.436 "data_offset": 0, 00:18:09.436 "data_size": 0 00:18:09.436 }, 00:18:09.436 { 00:18:09.436 "name": "BaseBdev4", 00:18:09.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.436 "is_configured": 
false, 00:18:09.436 "data_offset": 0, 00:18:09.436 "data_size": 0 00:18:09.436 } 00:18:09.436 ] 00:18:09.436 }' 00:18:09.436 05:37:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.436 05:37:13 -- common/autotest_common.sh@10 -- # set +x 00:18:10.002 05:37:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.569 [2024-10-07 05:37:14.256587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.569 BaseBdev2 00:18:10.569 05:37:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:10.569 05:37:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:10.569 05:37:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:10.569 05:37:14 -- common/autotest_common.sh@889 -- # local i 00:18:10.569 05:37:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:10.569 05:37:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:10.569 05:37:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.569 05:37:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.828 [ 00:18:10.828 { 00:18:10.828 "name": "BaseBdev2", 00:18:10.828 "aliases": [ 00:18:10.828 "4b1c4707-ac65-43e2-962d-37fea893b191" 00:18:10.828 ], 00:18:10.828 "product_name": "Malloc disk", 00:18:10.828 "block_size": 512, 00:18:10.828 "num_blocks": 65536, 00:18:10.828 "uuid": "4b1c4707-ac65-43e2-962d-37fea893b191", 00:18:10.828 "assigned_rate_limits": { 00:18:10.828 "rw_ios_per_sec": 0, 00:18:10.828 "rw_mbytes_per_sec": 0, 00:18:10.828 "r_mbytes_per_sec": 0, 00:18:10.828 "w_mbytes_per_sec": 0 00:18:10.828 }, 00:18:10.828 "claimed": true, 00:18:10.828 "claim_type": "exclusive_write", 00:18:10.828 "zoned": false, 00:18:10.828 "supported_io_types": { 00:18:10.828 "read": true, 00:18:10.828 "write": true, 00:18:10.828 "unmap": true, 00:18:10.828 "write_zeroes": true, 00:18:10.828 "flush": true, 00:18:10.828 "reset": true, 00:18:10.828 "compare": false, 00:18:10.828 "compare_and_write": false, 00:18:10.828 "abort": true, 00:18:10.828 "nvme_admin": false, 00:18:10.828 "nvme_io": false 00:18:10.828 }, 00:18:10.828 "memory_domains": [ 00:18:10.828 { 00:18:10.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.828 "dma_device_type": 2 00:18:10.828 } 00:18:10.828 ], 00:18:10.828 "driver_specific": {} 00:18:10.828 } 00:18:10.828 ] 00:18:10.828 05:37:14 -- common/autotest_common.sh@895 -- # return 0 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.828 
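The verify_raid_bdev_state helper being traced here re-reads the array after every change and filters the JSON with jq; the core of that check condenses to a one-liner (same socket and jq filter as in this log, output format is illustrative):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
# prints "configuring 2/4" at this point in the run, and "online 4/4" once BaseBdev4 is claimed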
05:37:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.828 05:37:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.087 05:37:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.087 "name": "Existed_Raid", 00:18:11.087 "uuid": "5fa150ab-de0c-4766-b91a-0eba2e025d0c", 00:18:11.087 "strip_size_kb": 64, 00:18:11.087 "state": "configuring", 00:18:11.087 "raid_level": "raid0", 00:18:11.087 "superblock": true, 00:18:11.087 "num_base_bdevs": 4, 00:18:11.087 "num_base_bdevs_discovered": 2, 00:18:11.087 "num_base_bdevs_operational": 4, 00:18:11.087 "base_bdevs_list": [ 00:18:11.087 { 00:18:11.087 "name": "BaseBdev1", 00:18:11.087 "uuid": "17363bd5-691e-4241-b2b1-d70666d4cd2b", 00:18:11.087 "is_configured": true, 00:18:11.087 "data_offset": 2048, 00:18:11.087 "data_size": 63488 00:18:11.087 }, 00:18:11.087 { 00:18:11.087 "name": "BaseBdev2", 00:18:11.087 "uuid": "4b1c4707-ac65-43e2-962d-37fea893b191", 00:18:11.087 "is_configured": true, 00:18:11.087 "data_offset": 2048, 00:18:11.087 "data_size": 63488 00:18:11.087 }, 00:18:11.087 { 00:18:11.087 "name": "BaseBdev3", 00:18:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.087 "is_configured": false, 00:18:11.087 "data_offset": 0, 00:18:11.087 "data_size": 0 00:18:11.087 }, 00:18:11.087 { 00:18:11.087 "name": "BaseBdev4", 00:18:11.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.087 "is_configured": false, 00:18:11.087 "data_offset": 0, 00:18:11.087 "data_size": 0 00:18:11.087 } 00:18:11.087 ] 00:18:11.087 }' 00:18:11.087 05:37:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.087 05:37:14 -- common/autotest_common.sh@10 -- # set +x 00:18:11.653 05:37:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.912 [2024-10-07 05:37:15.808566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.912 BaseBdev3 00:18:11.912 05:37:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:11.912 05:37:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:11.912 05:37:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:11.912 05:37:15 -- common/autotest_common.sh@889 -- # local i 00:18:11.912 05:37:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:11.912 05:37:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:11.912 05:37:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.170 05:37:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.429 [ 00:18:12.429 { 00:18:12.429 "name": "BaseBdev3", 00:18:12.429 "aliases": [ 00:18:12.429 "2179b159-7cd9-402e-a25e-ec364dc67891" 00:18:12.429 ], 00:18:12.429 "product_name": "Malloc disk", 00:18:12.429 "block_size": 512, 00:18:12.429 "num_blocks": 65536, 00:18:12.429 "uuid": "2179b159-7cd9-402e-a25e-ec364dc67891", 00:18:12.429 "assigned_rate_limits": { 00:18:12.429 "rw_ios_per_sec": 0, 00:18:12.429 "rw_mbytes_per_sec": 0, 00:18:12.429 "r_mbytes_per_sec": 0, 00:18:12.429 "w_mbytes_per_sec": 0 00:18:12.429 }, 00:18:12.429 "claimed": true, 00:18:12.429 "claim_type": "exclusive_write", 00:18:12.429 "zoned": false, 
00:18:12.429 "supported_io_types": { 00:18:12.429 "read": true, 00:18:12.429 "write": true, 00:18:12.429 "unmap": true, 00:18:12.429 "write_zeroes": true, 00:18:12.429 "flush": true, 00:18:12.429 "reset": true, 00:18:12.429 "compare": false, 00:18:12.429 "compare_and_write": false, 00:18:12.429 "abort": true, 00:18:12.429 "nvme_admin": false, 00:18:12.429 "nvme_io": false 00:18:12.429 }, 00:18:12.429 "memory_domains": [ 00:18:12.429 { 00:18:12.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.429 "dma_device_type": 2 00:18:12.429 } 00:18:12.429 ], 00:18:12.429 "driver_specific": {} 00:18:12.429 } 00:18:12.429 ] 00:18:12.429 05:37:16 -- common/autotest_common.sh@895 -- # return 0 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.429 05:37:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.688 05:37:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.688 "name": "Existed_Raid", 00:18:12.688 "uuid": "5fa150ab-de0c-4766-b91a-0eba2e025d0c", 00:18:12.688 "strip_size_kb": 64, 00:18:12.688 "state": "configuring", 00:18:12.688 "raid_level": "raid0", 00:18:12.688 "superblock": true, 00:18:12.688 "num_base_bdevs": 4, 00:18:12.688 "num_base_bdevs_discovered": 3, 00:18:12.688 "num_base_bdevs_operational": 4, 00:18:12.688 "base_bdevs_list": [ 00:18:12.688 { 00:18:12.688 "name": "BaseBdev1", 00:18:12.688 "uuid": "17363bd5-691e-4241-b2b1-d70666d4cd2b", 00:18:12.688 "is_configured": true, 00:18:12.688 "data_offset": 2048, 00:18:12.688 "data_size": 63488 00:18:12.688 }, 00:18:12.688 { 00:18:12.688 "name": "BaseBdev2", 00:18:12.688 "uuid": "4b1c4707-ac65-43e2-962d-37fea893b191", 00:18:12.688 "is_configured": true, 00:18:12.688 "data_offset": 2048, 00:18:12.688 "data_size": 63488 00:18:12.688 }, 00:18:12.688 { 00:18:12.688 "name": "BaseBdev3", 00:18:12.688 "uuid": "2179b159-7cd9-402e-a25e-ec364dc67891", 00:18:12.688 "is_configured": true, 00:18:12.688 "data_offset": 2048, 00:18:12.688 "data_size": 63488 00:18:12.688 }, 00:18:12.688 { 00:18:12.688 "name": "BaseBdev4", 00:18:12.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.688 "is_configured": false, 00:18:12.688 "data_offset": 0, 00:18:12.688 "data_size": 0 00:18:12.688 } 00:18:12.688 ] 00:18:12.688 }' 00:18:12.688 05:37:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.688 05:37:16 -- common/autotest_common.sh@10 -- # set +x 00:18:13.256 05:37:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:13.515 [2024-10-07 05:37:17.321033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:13.515 [2024-10-07 05:37:17.321293] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:13.515 [2024-10-07 05:37:17.321307] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:13.515 [2024-10-07 05:37:17.321468] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:13.515 [2024-10-07 05:37:17.321852] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:13.515 [2024-10-07 05:37:17.321873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:13.515 [2024-10-07 05:37:17.322031] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.515 BaseBdev4 00:18:13.515 05:37:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:13.515 05:37:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:13.515 05:37:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:13.515 05:37:17 -- common/autotest_common.sh@889 -- # local i 00:18:13.515 05:37:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:13.515 05:37:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:13.515 05:37:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.773 05:37:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:14.030 [ 00:18:14.030 { 00:18:14.030 "name": "BaseBdev4", 00:18:14.030 "aliases": [ 00:18:14.030 "d299a94b-d7a3-4f95-88a2-52c6b219c41e" 00:18:14.030 ], 00:18:14.030 "product_name": "Malloc disk", 00:18:14.030 "block_size": 512, 00:18:14.030 "num_blocks": 65536, 00:18:14.030 "uuid": "d299a94b-d7a3-4f95-88a2-52c6b219c41e", 00:18:14.030 "assigned_rate_limits": { 00:18:14.030 "rw_ios_per_sec": 0, 00:18:14.030 "rw_mbytes_per_sec": 0, 00:18:14.030 "r_mbytes_per_sec": 0, 00:18:14.030 "w_mbytes_per_sec": 0 00:18:14.030 }, 00:18:14.030 "claimed": true, 00:18:14.030 "claim_type": "exclusive_write", 00:18:14.030 "zoned": false, 00:18:14.030 "supported_io_types": { 00:18:14.030 "read": true, 00:18:14.030 "write": true, 00:18:14.030 "unmap": true, 00:18:14.030 "write_zeroes": true, 00:18:14.030 "flush": true, 00:18:14.030 "reset": true, 00:18:14.030 "compare": false, 00:18:14.030 "compare_and_write": false, 00:18:14.030 "abort": true, 00:18:14.030 "nvme_admin": false, 00:18:14.030 "nvme_io": false 00:18:14.030 }, 00:18:14.030 "memory_domains": [ 00:18:14.030 { 00:18:14.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.030 "dma_device_type": 2 00:18:14.030 } 00:18:14.030 ], 00:18:14.030 "driver_specific": {} 00:18:14.030 } 00:18:14.030 ] 00:18:14.030 05:37:17 -- common/autotest_common.sh@895 -- # return 0 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
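With all four base bdevs claimed the array registers its I/O device and goes online; after confirming that state, the trace below removes BaseBdev1 again, and because raid0 carries no redundancy the expected state is "offline" rather than a degraded online. That check boils down to (RPCs as used in this log):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# raid0 stripes without parity or mirroring, so the expected output here is "offline";
# for a redundant level such as raid1 the script's has_redundancy check keeps the expected state "online"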
00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.030 05:37:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.287 05:37:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.287 "name": "Existed_Raid", 00:18:14.287 "uuid": "5fa150ab-de0c-4766-b91a-0eba2e025d0c", 00:18:14.288 "strip_size_kb": 64, 00:18:14.288 "state": "online", 00:18:14.288 "raid_level": "raid0", 00:18:14.288 "superblock": true, 00:18:14.288 "num_base_bdevs": 4, 00:18:14.288 "num_base_bdevs_discovered": 4, 00:18:14.288 "num_base_bdevs_operational": 4, 00:18:14.288 "base_bdevs_list": [ 00:18:14.288 { 00:18:14.288 "name": "BaseBdev1", 00:18:14.288 "uuid": "17363bd5-691e-4241-b2b1-d70666d4cd2b", 00:18:14.288 "is_configured": true, 00:18:14.288 "data_offset": 2048, 00:18:14.288 "data_size": 63488 00:18:14.288 }, 00:18:14.288 { 00:18:14.288 "name": "BaseBdev2", 00:18:14.288 "uuid": "4b1c4707-ac65-43e2-962d-37fea893b191", 00:18:14.288 "is_configured": true, 00:18:14.288 "data_offset": 2048, 00:18:14.288 "data_size": 63488 00:18:14.288 }, 00:18:14.288 { 00:18:14.288 "name": "BaseBdev3", 00:18:14.288 "uuid": "2179b159-7cd9-402e-a25e-ec364dc67891", 00:18:14.288 "is_configured": true, 00:18:14.288 "data_offset": 2048, 00:18:14.288 "data_size": 63488 00:18:14.288 }, 00:18:14.288 { 00:18:14.288 "name": "BaseBdev4", 00:18:14.288 "uuid": "d299a94b-d7a3-4f95-88a2-52c6b219c41e", 00:18:14.288 "is_configured": true, 00:18:14.288 "data_offset": 2048, 00:18:14.288 "data_size": 63488 00:18:14.288 } 00:18:14.288 ] 00:18:14.288 }' 00:18:14.288 05:37:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.288 05:37:18 -- common/autotest_common.sh@10 -- # set +x 00:18:14.853 05:37:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:15.111 [2024-10-07 05:37:18.885438] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.112 [2024-10-07 05:37:18.885469] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.112 [2024-10-07 05:37:18.885533] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.112 05:37:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.369 05:37:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.369 "name": "Existed_Raid", 00:18:15.369 "uuid": "5fa150ab-de0c-4766-b91a-0eba2e025d0c", 00:18:15.369 "strip_size_kb": 64, 00:18:15.369 "state": "offline", 00:18:15.369 "raid_level": "raid0", 00:18:15.369 "superblock": true, 00:18:15.369 "num_base_bdevs": 4, 00:18:15.369 "num_base_bdevs_discovered": 3, 00:18:15.369 "num_base_bdevs_operational": 3, 00:18:15.369 "base_bdevs_list": [ 00:18:15.369 { 00:18:15.369 "name": null, 00:18:15.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.369 "is_configured": false, 00:18:15.369 "data_offset": 2048, 00:18:15.369 "data_size": 63488 00:18:15.369 }, 00:18:15.369 { 00:18:15.369 "name": "BaseBdev2", 00:18:15.369 "uuid": "4b1c4707-ac65-43e2-962d-37fea893b191", 00:18:15.369 "is_configured": true, 00:18:15.369 "data_offset": 2048, 00:18:15.369 "data_size": 63488 00:18:15.370 }, 00:18:15.370 { 00:18:15.370 "name": "BaseBdev3", 00:18:15.370 "uuid": "2179b159-7cd9-402e-a25e-ec364dc67891", 00:18:15.370 "is_configured": true, 00:18:15.370 "data_offset": 2048, 00:18:15.370 "data_size": 63488 00:18:15.370 }, 00:18:15.370 { 00:18:15.370 "name": "BaseBdev4", 00:18:15.370 "uuid": "d299a94b-d7a3-4f95-88a2-52c6b219c41e", 00:18:15.370 "is_configured": true, 00:18:15.370 "data_offset": 2048, 00:18:15.370 "data_size": 63488 00:18:15.370 } 00:18:15.370 ] 00:18:15.370 }' 00:18:15.370 05:37:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.370 05:37:19 -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 05:37:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:15.934 05:37:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:15.934 05:37:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:15.934 05:37:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.192 05:37:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.192 05:37:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.192 05:37:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:16.192 [2024-10-07 05:37:20.118790] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.452 05:37:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.452 05:37:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.452 05:37:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.452 05:37:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.711 05:37:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.711 05:37:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.711 05:37:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:16.711 [2024-10-07 05:37:20.647206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.969 05:37:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.969 05:37:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.969 05:37:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.969 05:37:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.228 05:37:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:17.228 05:37:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.228 05:37:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:17.486 [2024-10-07 05:37:21.253265] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:17.486 [2024-10-07 05:37:21.253348] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:17.486 05:37:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.486 05:37:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.486 05:37:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.486 05:37:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.745 05:37:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:17.745 05:37:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:17.745 05:37:21 -- bdev/bdev_raid.sh@287 -- # killprocess 150389 00:18:17.745 05:37:21 -- common/autotest_common.sh@926 -- # '[' -z 150389 ']' 00:18:17.745 05:37:21 -- common/autotest_common.sh@930 -- # kill -0 150389 00:18:17.745 05:37:21 -- common/autotest_common.sh@931 -- # uname 00:18:17.745 05:37:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.745 05:37:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150389 00:18:17.745 killing process with pid 150389 00:18:17.745 05:37:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:17.745 05:37:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:17.745 05:37:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150389' 00:18:17.745 05:37:21 -- common/autotest_common.sh@945 -- # kill 150389 00:18:17.745 05:37:21 -- common/autotest_common.sh@950 -- # wait 150389 00:18:17.745 [2024-10-07 05:37:21.597366] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.745 [2024-10-07 05:37:21.597517] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:18.681 05:37:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:18.681 00:18:18.681 real 0m15.382s 00:18:18.681 user 0m27.164s 00:18:18.681 sys 0m1.962s 00:18:18.681 ************************************ 00:18:18.681 END TEST raid_state_function_test_sb 00:18:18.681 ************************************ 00:18:18.681 05:37:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.681 05:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:18.940 05:37:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:18.940 05:37:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.940 05:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:18.940 ************************************ 00:18:18.940 START 
TEST raid_superblock_test 00:18:18.940 ************************************ 00:18:18.940 05:37:22 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@357 -- # raid_pid=151387 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@358 -- # waitforlisten 151387 /var/tmp/spdk-raid.sock 00:18:18.940 05:37:22 -- common/autotest_common.sh@819 -- # '[' -z 151387 ']' 00:18:18.940 05:37:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:18.940 05:37:22 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:18.940 05:37:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:18.940 05:37:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:18.940 05:37:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:18.941 05:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:18.941 [2024-10-07 05:37:22.768319] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
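raid_superblock_test builds its base devices as passthru bdevs layered on malloc disks, giving every base a fixed, predictable UUID (the -u arguments below). The pairing traced in what follows reduces to (commands taken from the RPCs in this trace):

for i in 1 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc$i
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1

The -s superblock reserves the first 2048 blocks of each 65536-block base, consistent with the data_offset 2048 / data_size 63488 values reported in the dumps.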
00:18:18.941 [2024-10-07 05:37:22.768530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151387 ] 00:18:19.200 [2024-10-07 05:37:22.938832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.459 [2024-10-07 05:37:23.213338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.459 [2024-10-07 05:37:23.406161] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.717 05:37:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:19.717 05:37:23 -- common/autotest_common.sh@852 -- # return 0 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:19.717 05:37:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:19.976 malloc1 00:18:19.976 05:37:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:20.234 [2024-10-07 05:37:24.160528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:20.234 [2024-10-07 05:37:24.160650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.234 [2024-10-07 05:37:24.160685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:20.234 [2024-10-07 05:37:24.160731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.234 [2024-10-07 05:37:24.162845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.234 [2024-10-07 05:37:24.162909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:20.234 pt1 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.234 05:37:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:20.492 malloc2 00:18:20.492 05:37:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:20.769 [2024-10-07 05:37:24.596436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.769 [2024-10-07 05:37:24.596517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.769 [2024-10-07 05:37:24.596563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:20.769 [2024-10-07 05:37:24.596618] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.769 [2024-10-07 05:37:24.598966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.769 [2024-10-07 05:37:24.599016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.769 pt2 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:20.769 05:37:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:21.027 malloc3 00:18:21.028 05:37:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:21.287 [2024-10-07 05:37:25.021283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:21.287 [2024-10-07 05:37:25.021383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.287 [2024-10-07 05:37:25.021437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:21.287 [2024-10-07 05:37:25.021492] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.287 [2024-10-07 05:37:25.024020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.287 [2024-10-07 05:37:25.024077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:21.287 pt3 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:21.287 malloc4 00:18:21.287 05:37:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:21.546 [2024-10-07 05:37:25.438331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:21.546 [2024-10-07 05:37:25.438431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.546 [2024-10-07 05:37:25.438470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:21.546 [2024-10-07 05:37:25.438534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.546 [2024-10-07 05:37:25.441019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.546 [2024-10-07 05:37:25.441082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:21.546 pt4 00:18:21.546 05:37:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:21.546 05:37:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:21.546 05:37:25 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:21.805 [2024-10-07 05:37:25.634430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.805 [2024-10-07 05:37:25.636613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.805 [2024-10-07 05:37:25.636695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:21.805 [2024-10-07 05:37:25.636778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:21.805 [2024-10-07 05:37:25.637037] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:21.805 [2024-10-07 05:37:25.637063] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:21.805 [2024-10-07 05:37:25.637185] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:21.805 [2024-10-07 05:37:25.637554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:21.805 [2024-10-07 05:37:25.637579] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:21.806 [2024-10-07 05:37:25.637716] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.806 05:37:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.064 05:37:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.064 "name": "raid_bdev1", 00:18:22.064 "uuid": 
"842d0393-27cb-4e9e-a5b3-9e29f36f3735", 00:18:22.064 "strip_size_kb": 64, 00:18:22.064 "state": "online", 00:18:22.064 "raid_level": "raid0", 00:18:22.064 "superblock": true, 00:18:22.064 "num_base_bdevs": 4, 00:18:22.064 "num_base_bdevs_discovered": 4, 00:18:22.064 "num_base_bdevs_operational": 4, 00:18:22.064 "base_bdevs_list": [ 00:18:22.064 { 00:18:22.064 "name": "pt1", 00:18:22.064 "uuid": "167e2a1d-264c-54aa-b2e9-76a76f67623f", 00:18:22.064 "is_configured": true, 00:18:22.064 "data_offset": 2048, 00:18:22.064 "data_size": 63488 00:18:22.064 }, 00:18:22.064 { 00:18:22.064 "name": "pt2", 00:18:22.064 "uuid": "71645af9-e2da-5cc5-aae7-d4e533001bce", 00:18:22.064 "is_configured": true, 00:18:22.064 "data_offset": 2048, 00:18:22.064 "data_size": 63488 00:18:22.064 }, 00:18:22.064 { 00:18:22.064 "name": "pt3", 00:18:22.064 "uuid": "1266c586-65ae-5bc2-8fa6-9c9cfcffeaa9", 00:18:22.064 "is_configured": true, 00:18:22.064 "data_offset": 2048, 00:18:22.064 "data_size": 63488 00:18:22.064 }, 00:18:22.064 { 00:18:22.064 "name": "pt4", 00:18:22.064 "uuid": "02c21122-8132-576c-99d7-e7b423291c84", 00:18:22.064 "is_configured": true, 00:18:22.064 "data_offset": 2048, 00:18:22.064 "data_size": 63488 00:18:22.064 } 00:18:22.064 ] 00:18:22.064 }' 00:18:22.064 05:37:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.064 05:37:25 -- common/autotest_common.sh@10 -- # set +x 00:18:22.631 05:37:26 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.631 05:37:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:22.890 [2024-10-07 05:37:26.722790] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.890 05:37:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=842d0393-27cb-4e9e-a5b3-9e29f36f3735 00:18:22.890 05:37:26 -- bdev/bdev_raid.sh@380 -- # '[' -z 842d0393-27cb-4e9e-a5b3-9e29f36f3735 ']' 00:18:22.890 05:37:26 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:23.149 [2024-10-07 05:37:26.962685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.149 [2024-10-07 05:37:26.962737] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.149 [2024-10-07 05:37:26.962842] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.149 [2024-10-07 05:37:26.962955] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.149 [2024-10-07 05:37:26.962967] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:23.149 05:37:26 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:23.149 05:37:26 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.407 05:37:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:23.407 05:37:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:23.407 05:37:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.407 05:37:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:23.700 05:37:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.700 05:37:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:23.700 05:37:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.700 05:37:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:23.962 05:37:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.963 05:37:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:24.221 05:37:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:24.221 05:37:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:24.480 05:37:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:24.480 05:37:28 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:24.480 05:37:28 -- common/autotest_common.sh@640 -- # local es=0 00:18:24.480 05:37:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:24.480 05:37:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.480 05:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:24.480 05:37:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.480 05:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:24.480 05:37:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.480 05:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:24.480 05:37:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.480 05:37:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:24.480 05:37:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:24.738 [2024-10-07 05:37:28.668596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.738 [2024-10-07 05:37:28.670220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.738 [2024-10-07 05:37:28.670276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:24.738 [2024-10-07 05:37:28.670323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:24.738 [2024-10-07 05:37:28.670381] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:24.738 [2024-10-07 05:37:28.670462] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:24.738 [2024-10-07 05:37:28.670536] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:24.738 [2024-10-07 05:37:28.670598] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:24.738 [2024-10-07 05:37:28.670625] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.738 [2024-10-07 05:37:28.670636] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:18:24.738 request: 00:18:24.738 { 00:18:24.738 "name": "raid_bdev1", 00:18:24.738 "raid_level": "raid0", 00:18:24.738 "base_bdevs": [ 00:18:24.738 "malloc1", 00:18:24.738 "malloc2", 00:18:24.738 "malloc3", 00:18:24.738 "malloc4" 00:18:24.738 ], 00:18:24.738 "superblock": false, 00:18:24.738 "strip_size_kb": 64, 00:18:24.738 "method": "bdev_raid_create", 00:18:24.738 "req_id": 1 00:18:24.738 } 00:18:24.738 Got JSON-RPC error response 00:18:24.738 response: 00:18:24.738 { 00:18:24.738 "code": -17, 00:18:24.738 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.738 } 00:18:24.738 05:37:28 -- common/autotest_common.sh@643 -- # es=1 00:18:24.738 05:37:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:24.738 05:37:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:24.738 05:37:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:24.738 05:37:28 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.738 05:37:28 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:24.996 05:37:28 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:24.996 05:37:28 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:24.996 05:37:28 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.256 [2024-10-07 05:37:29.228736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.256 [2024-10-07 05:37:29.228826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.256 [2024-10-07 05:37:29.228866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:25.256 [2024-10-07 05:37:29.228898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.256 [2024-10-07 05:37:29.231469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.256 [2024-10-07 05:37:29.231558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.256 [2024-10-07 05:37:29.231682] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:25.256 [2024-10-07 05:37:29.231745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.256 pt1 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.515 "name": "raid_bdev1", 00:18:25.515 "uuid": "842d0393-27cb-4e9e-a5b3-9e29f36f3735", 00:18:25.515 "strip_size_kb": 64, 00:18:25.515 "state": "configuring", 00:18:25.515 "raid_level": "raid0", 00:18:25.515 "superblock": true, 00:18:25.515 "num_base_bdevs": 4, 00:18:25.515 "num_base_bdevs_discovered": 1, 00:18:25.515 "num_base_bdevs_operational": 4, 00:18:25.515 "base_bdevs_list": [ 00:18:25.515 { 00:18:25.515 "name": "pt1", 00:18:25.515 "uuid": "167e2a1d-264c-54aa-b2e9-76a76f67623f", 00:18:25.515 "is_configured": true, 00:18:25.515 "data_offset": 2048, 00:18:25.515 "data_size": 63488 00:18:25.515 }, 00:18:25.515 { 00:18:25.515 "name": null, 00:18:25.515 "uuid": "71645af9-e2da-5cc5-aae7-d4e533001bce", 00:18:25.515 "is_configured": false, 00:18:25.515 "data_offset": 2048, 00:18:25.515 "data_size": 63488 00:18:25.515 }, 00:18:25.515 { 00:18:25.515 "name": null, 00:18:25.515 "uuid": "1266c586-65ae-5bc2-8fa6-9c9cfcffeaa9", 00:18:25.515 "is_configured": false, 00:18:25.515 "data_offset": 2048, 00:18:25.515 "data_size": 63488 00:18:25.515 }, 00:18:25.515 { 00:18:25.515 "name": null, 00:18:25.515 "uuid": "02c21122-8132-576c-99d7-e7b423291c84", 00:18:25.515 "is_configured": false, 00:18:25.515 "data_offset": 2048, 00:18:25.515 "data_size": 63488 00:18:25.515 } 00:18:25.515 ] 00:18:25.515 }' 00:18:25.515 05:37:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.515 05:37:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.083 05:37:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:26.083 05:37:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.341 [2024-10-07 05:37:30.265007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.341 [2024-10-07 05:37:30.265132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.341 [2024-10-07 05:37:30.265188] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:26.341 [2024-10-07 05:37:30.265216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.341 [2024-10-07 05:37:30.265755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.341 [2024-10-07 05:37:30.265802] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.341 [2024-10-07 05:37:30.265911] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:26.341 [2024-10-07 05:37:30.265940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.341 pt2 00:18:26.341 05:37:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:26.599 [2024-10-07 05:37:30.533014] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:26.599 05:37:30 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.599 05:37:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.858 05:37:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.858 "name": "raid_bdev1", 00:18:26.858 "uuid": "842d0393-27cb-4e9e-a5b3-9e29f36f3735", 00:18:26.858 "strip_size_kb": 64, 00:18:26.858 "state": "configuring", 00:18:26.858 "raid_level": "raid0", 00:18:26.858 "superblock": true, 00:18:26.858 "num_base_bdevs": 4, 00:18:26.858 "num_base_bdevs_discovered": 1, 00:18:26.858 "num_base_bdevs_operational": 4, 00:18:26.858 "base_bdevs_list": [ 00:18:26.858 { 00:18:26.858 "name": "pt1", 00:18:26.858 "uuid": "167e2a1d-264c-54aa-b2e9-76a76f67623f", 00:18:26.858 "is_configured": true, 00:18:26.858 "data_offset": 2048, 00:18:26.858 "data_size": 63488 00:18:26.858 }, 00:18:26.858 { 00:18:26.858 "name": null, 00:18:26.858 "uuid": "71645af9-e2da-5cc5-aae7-d4e533001bce", 00:18:26.858 "is_configured": false, 00:18:26.858 "data_offset": 2048, 00:18:26.858 "data_size": 63488 00:18:26.858 }, 00:18:26.858 { 00:18:26.858 "name": null, 00:18:26.858 "uuid": "1266c586-65ae-5bc2-8fa6-9c9cfcffeaa9", 00:18:26.858 "is_configured": false, 00:18:26.858 "data_offset": 2048, 00:18:26.858 "data_size": 63488 00:18:26.858 }, 00:18:26.858 { 00:18:26.858 "name": null, 00:18:26.858 "uuid": "02c21122-8132-576c-99d7-e7b423291c84", 00:18:26.858 "is_configured": false, 00:18:26.858 "data_offset": 2048, 00:18:26.858 "data_size": 63488 00:18:26.858 } 00:18:26.858 ] 00:18:26.858 }' 00:18:26.858 05:37:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.858 05:37:30 -- common/autotest_common.sh@10 -- # set +x 00:18:27.792 05:37:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:27.792 05:37:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:27.793 05:37:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.793 [2024-10-07 05:37:31.709468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.793 [2024-10-07 05:37:31.709625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.793 [2024-10-07 05:37:31.709678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:27.793 [2024-10-07 05:37:31.709704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.793 [2024-10-07 05:37:31.710325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.793 [2024-10-07 05:37:31.710411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:27.793 [2024-10-07 05:37:31.710562] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:27.793 [2024-10-07 05:37:31.710593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.793 pt2 00:18:27.793 05:37:31 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:27.793 05:37:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:27.793 05:37:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:28.052 [2024-10-07 05:37:31.893430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:28.052 [2024-10-07 05:37:31.893548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.052 [2024-10-07 05:37:31.893585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:28.052 [2024-10-07 05:37:31.893615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.052 [2024-10-07 05:37:31.894140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.052 [2024-10-07 05:37:31.894209] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:28.052 [2024-10-07 05:37:31.894336] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:28.052 [2024-10-07 05:37:31.894363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:28.052 pt3 00:18:28.052 05:37:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:28.052 05:37:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:28.052 05:37:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:28.311 [2024-10-07 05:37:32.145535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:28.311 [2024-10-07 05:37:32.145693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.311 [2024-10-07 05:37:32.145747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:28.311 [2024-10-07 05:37:32.145782] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.311 [2024-10-07 05:37:32.146369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.311 [2024-10-07 05:37:32.146438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:28.311 [2024-10-07 05:37:32.146624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:28.311 [2024-10-07 05:37:32.146656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:28.311 [2024-10-07 05:37:32.146823] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:18:28.311 [2024-10-07 05:37:32.146837] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:28.311 [2024-10-07 05:37:32.146943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.311 [2024-10-07 05:37:32.147329] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:18:28.311 [2024-10-07 05:37:32.147368] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:18:28.311 [2024-10-07 05:37:32.147534] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.311 pt4 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.311 05:37:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.569 05:37:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.569 "name": "raid_bdev1", 00:18:28.569 "uuid": "842d0393-27cb-4e9e-a5b3-9e29f36f3735", 00:18:28.569 "strip_size_kb": 64, 00:18:28.569 "state": "online", 00:18:28.569 "raid_level": "raid0", 00:18:28.569 "superblock": true, 00:18:28.569 "num_base_bdevs": 4, 00:18:28.569 "num_base_bdevs_discovered": 4, 00:18:28.569 "num_base_bdevs_operational": 4, 00:18:28.569 "base_bdevs_list": [ 00:18:28.569 { 00:18:28.569 "name": "pt1", 00:18:28.569 "uuid": "167e2a1d-264c-54aa-b2e9-76a76f67623f", 00:18:28.569 "is_configured": true, 00:18:28.569 "data_offset": 2048, 00:18:28.569 "data_size": 63488 00:18:28.569 }, 00:18:28.569 { 00:18:28.569 "name": "pt2", 00:18:28.569 "uuid": "71645af9-e2da-5cc5-aae7-d4e533001bce", 00:18:28.569 "is_configured": true, 00:18:28.569 "data_offset": 2048, 00:18:28.569 "data_size": 63488 00:18:28.569 }, 00:18:28.569 { 00:18:28.569 "name": "pt3", 00:18:28.569 "uuid": "1266c586-65ae-5bc2-8fa6-9c9cfcffeaa9", 00:18:28.569 "is_configured": true, 00:18:28.569 "data_offset": 2048, 00:18:28.569 "data_size": 63488 00:18:28.569 }, 00:18:28.569 { 00:18:28.569 "name": "pt4", 00:18:28.569 "uuid": "02c21122-8132-576c-99d7-e7b423291c84", 00:18:28.569 "is_configured": true, 00:18:28.569 "data_offset": 2048, 00:18:28.569 "data_size": 63488 00:18:28.569 } 00:18:28.569 ] 00:18:28.569 }' 00:18:28.569 05:37:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.569 05:37:32 -- common/autotest_common.sh@10 -- # set +x 00:18:29.137 05:37:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:29.137 05:37:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:29.396 [2024-10-07 05:37:33.163310] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.396 05:37:33 -- bdev/bdev_raid.sh@430 -- # '[' 842d0393-27cb-4e9e-a5b3-9e29f36f3735 '!=' 842d0393-27cb-4e9e-a5b3-9e29f36f3735 ']' 00:18:29.396 05:37:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:29.396 05:37:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:29.396 05:37:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:29.396 05:37:33 -- bdev/bdev_raid.sh@511 -- # killprocess 151387 00:18:29.396 05:37:33 -- common/autotest_common.sh@926 -- # '[' -z 151387 ']' 00:18:29.396 05:37:33 -- common/autotest_common.sh@930 -- # kill -0 151387 00:18:29.396 05:37:33 -- common/autotest_common.sh@931 -- # uname 00:18:29.396 05:37:33 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:29.396 05:37:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 151387 00:18:29.396 05:37:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:29.396 05:37:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:29.396 05:37:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 151387' 00:18:29.396 killing process with pid 151387 00:18:29.396 05:37:33 -- common/autotest_common.sh@945 -- # kill 151387 00:18:29.396 [2024-10-07 05:37:33.204211] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.396 [2024-10-07 05:37:33.204296] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.396 05:37:33 -- common/autotest_common.sh@950 -- # wait 151387 00:18:29.396 [2024-10-07 05:37:33.204370] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.396 [2024-10-07 05:37:33.204381] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:18:29.655 [2024-10-07 05:37:33.455996] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.591 05:37:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:30.591 00:18:30.591 real 0m11.663s 00:18:30.591 user 0m20.231s 00:18:30.591 sys 0m1.539s 00:18:30.591 05:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.591 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:18:30.591 ************************************ 00:18:30.591 END TEST raid_superblock_test 00:18:30.591 ************************************ 00:18:30.591 05:37:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:30.591 05:37:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:30.591 05:37:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:30.591 05:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.591 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:18:30.591 ************************************ 00:18:30.591 START TEST raid_state_function_test 00:18:30.591 ************************************ 00:18:30.591 05:37:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:30.591 05:37:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:30.592 
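The superblock half of the test above reduces to a short RPC sequence: wrap four 32 MiB malloc bdevs in passthru bdevs with fixed UUIDs, assemble them into a raid0 array with an on-disk superblock, tear everything down, and confirm that the superblocks left on the base bdevs both block a fresh create and allow automatic reassembly. A condensed sketch of that flow, assuming a bdev_svc target already listening on /var/tmp/spdk-raid.sock (the RPC names, sizes and UUIDs are taken from the trace; the rpc shell variable is just shorthand introduced here):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$i                    # 32 MiB, 512-byte blocks
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i                 # fixed UUID per base bdev
    done
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s   # -s writes superblocks
    $rpc bdev_raid_delete raid_bdev1
    for i in 1 2 3 4; do $rpc bdev_passthru_delete pt$i; done
    # Creating the array directly on the malloc bdevs now fails with -17 "File exists",
    # as in the JSON-RPC error above, because each malloc bdev still carries a raid superblock:
    $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 || true
    # Re-registering pt1..pt4 instead lets the examine path find those superblocks and bring
    # raid_bdev1 back online without an explicit bdev_raid_create, which is what the trace shows.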
05:37:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=152146 00:18:30.592 Process raid pid: 152146 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 152146' 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:30.592 05:37:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 152146 /var/tmp/spdk-raid.sock 00:18:30.592 05:37:34 -- common/autotest_common.sh@819 -- # '[' -z 152146 ']' 00:18:30.592 05:37:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:30.592 05:37:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:30.592 05:37:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:30.592 05:37:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.592 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:18:30.592 [2024-10-07 05:37:34.500075] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
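raid_state_function_test drives a bare bdev_svc application instead of reusing a full target: it starts the app with bdev_raid debug logging, remembers the pid, and blocks until the JSON-RPC socket answers before issuing any RPCs. Roughly, the prologue traced above amounts to the following (waitforlisten and killprocess are helpers from autotest_common.sh; capturing the pid via $! is an assumption about the script, the log only shows the resulting pid 152146):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &   # -L bdev_raid enables the *DEBUG* lines below
    raid_pid=$!
    waitforlisten "$raid_pid" "$sock"    # poll until the JSON-RPC socket accepts connections
    # ... test body issues rpc.py calls against $sock ...
    killprocess "$raid_pid"              # teardown, same as at the end of raid_superblock_test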
00:18:30.592 [2024-10-07 05:37:34.500285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.850 [2024-10-07 05:37:34.663756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.109 [2024-10-07 05:37:34.888252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.109 [2024-10-07 05:37:35.087470] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.676 05:37:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:31.676 05:37:35 -- common/autotest_common.sh@852 -- # return 0 00:18:31.676 05:37:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:31.934 [2024-10-07 05:37:35.762569] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:31.934 [2024-10-07 05:37:35.762679] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:31.934 [2024-10-07 05:37:35.762711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.934 [2024-10-07 05:37:35.762735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.934 [2024-10-07 05:37:35.762743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.934 [2024-10-07 05:37:35.762784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.934 [2024-10-07 05:37:35.762794] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:31.934 [2024-10-07 05:37:35.762818] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.934 05:37:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.935 05:37:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.935 05:37:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.193 05:37:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.193 "name": "Existed_Raid", 00:18:32.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.193 "strip_size_kb": 64, 00:18:32.193 "state": "configuring", 00:18:32.193 "raid_level": "concat", 00:18:32.193 "superblock": false, 00:18:32.193 "num_base_bdevs": 4, 00:18:32.193 "num_base_bdevs_discovered": 0, 00:18:32.193 "num_base_bdevs_operational": 4, 00:18:32.193 "base_bdevs_list": [ 00:18:32.193 { 00:18:32.193 
"name": "BaseBdev1", 00:18:32.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.193 "is_configured": false, 00:18:32.193 "data_offset": 0, 00:18:32.193 "data_size": 0 00:18:32.193 }, 00:18:32.193 { 00:18:32.193 "name": "BaseBdev2", 00:18:32.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.193 "is_configured": false, 00:18:32.193 "data_offset": 0, 00:18:32.193 "data_size": 0 00:18:32.193 }, 00:18:32.193 { 00:18:32.193 "name": "BaseBdev3", 00:18:32.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.193 "is_configured": false, 00:18:32.193 "data_offset": 0, 00:18:32.193 "data_size": 0 00:18:32.193 }, 00:18:32.193 { 00:18:32.193 "name": "BaseBdev4", 00:18:32.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.193 "is_configured": false, 00:18:32.193 "data_offset": 0, 00:18:32.193 "data_size": 0 00:18:32.193 } 00:18:32.193 ] 00:18:32.193 }' 00:18:32.193 05:37:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.193 05:37:36 -- common/autotest_common.sh@10 -- # set +x 00:18:32.761 05:37:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:33.019 [2024-10-07 05:37:36.834597] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.019 [2024-10-07 05:37:36.834641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:33.020 05:37:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:33.279 [2024-10-07 05:37:37.030739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.279 [2024-10-07 05:37:37.030824] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.279 [2024-10-07 05:37:37.030839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:33.279 [2024-10-07 05:37:37.030867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:33.279 [2024-10-07 05:37:37.030878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:33.279 [2024-10-07 05:37:37.030917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:33.279 [2024-10-07 05:37:37.030927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:33.279 [2024-10-07 05:37:37.030953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:33.279 05:37:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:33.279 [2024-10-07 05:37:37.253054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.279 BaseBdev1 00:18:33.538 05:37:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:33.538 05:37:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:33.538 05:37:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:33.538 05:37:37 -- common/autotest_common.sh@889 -- # local i 00:18:33.538 05:37:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:33.538 05:37:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:33.538 05:37:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.538 05:37:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:33.797 [ 00:18:33.797 { 00:18:33.797 "name": "BaseBdev1", 00:18:33.797 "aliases": [ 00:18:33.797 "2a52da13-061e-4304-85a3-30dff25d7333" 00:18:33.797 ], 00:18:33.797 "product_name": "Malloc disk", 00:18:33.797 "block_size": 512, 00:18:33.797 "num_blocks": 65536, 00:18:33.797 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:33.797 "assigned_rate_limits": { 00:18:33.797 "rw_ios_per_sec": 0, 00:18:33.797 "rw_mbytes_per_sec": 0, 00:18:33.797 "r_mbytes_per_sec": 0, 00:18:33.797 "w_mbytes_per_sec": 0 00:18:33.797 }, 00:18:33.797 "claimed": true, 00:18:33.797 "claim_type": "exclusive_write", 00:18:33.797 "zoned": false, 00:18:33.797 "supported_io_types": { 00:18:33.797 "read": true, 00:18:33.797 "write": true, 00:18:33.797 "unmap": true, 00:18:33.797 "write_zeroes": true, 00:18:33.797 "flush": true, 00:18:33.797 "reset": true, 00:18:33.797 "compare": false, 00:18:33.797 "compare_and_write": false, 00:18:33.797 "abort": true, 00:18:33.797 "nvme_admin": false, 00:18:33.797 "nvme_io": false 00:18:33.797 }, 00:18:33.797 "memory_domains": [ 00:18:33.797 { 00:18:33.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.797 "dma_device_type": 2 00:18:33.797 } 00:18:33.797 ], 00:18:33.797 "driver_specific": {} 00:18:33.797 } 00:18:33.797 ] 00:18:33.797 05:37:37 -- common/autotest_common.sh@895 -- # return 0 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.797 05:37:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.798 05:37:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.798 05:37:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.798 05:37:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.798 05:37:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.798 05:37:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.057 05:37:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.057 "name": "Existed_Raid", 00:18:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.057 "strip_size_kb": 64, 00:18:34.057 "state": "configuring", 00:18:34.057 "raid_level": "concat", 00:18:34.057 "superblock": false, 00:18:34.057 "num_base_bdevs": 4, 00:18:34.057 "num_base_bdevs_discovered": 1, 00:18:34.057 "num_base_bdevs_operational": 4, 00:18:34.057 "base_bdevs_list": [ 00:18:34.057 { 00:18:34.057 "name": "BaseBdev1", 00:18:34.057 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:34.057 "is_configured": true, 00:18:34.057 "data_offset": 0, 00:18:34.057 "data_size": 65536 00:18:34.057 }, 00:18:34.057 { 00:18:34.057 "name": "BaseBdev2", 00:18:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.057 "is_configured": false, 00:18:34.057 "data_offset": 0, 00:18:34.057 "data_size": 0 00:18:34.057 }, 
00:18:34.057 { 00:18:34.057 "name": "BaseBdev3", 00:18:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.057 "is_configured": false, 00:18:34.057 "data_offset": 0, 00:18:34.057 "data_size": 0 00:18:34.057 }, 00:18:34.057 { 00:18:34.057 "name": "BaseBdev4", 00:18:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.057 "is_configured": false, 00:18:34.057 "data_offset": 0, 00:18:34.057 "data_size": 0 00:18:34.057 } 00:18:34.057 ] 00:18:34.057 }' 00:18:34.057 05:37:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.057 05:37:37 -- common/autotest_common.sh@10 -- # set +x 00:18:34.623 05:37:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:34.881 [2024-10-07 05:37:38.677369] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:34.881 [2024-10-07 05:37:38.677439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:34.881 05:37:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:34.881 05:37:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:35.139 [2024-10-07 05:37:38.877432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.139 [2024-10-07 05:37:38.879058] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:35.139 [2024-10-07 05:37:38.879136] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:35.139 [2024-10-07 05:37:38.879151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:35.139 [2024-10-07 05:37:38.879178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:35.139 [2024-10-07 05:37:38.879189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:35.139 [2024-10-07 05:37:38.879206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.139 05:37:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.398 05:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.398 "name": "Existed_Raid", 00:18:35.398 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.398 "strip_size_kb": 64, 00:18:35.398 "state": "configuring", 00:18:35.398 "raid_level": "concat", 00:18:35.398 "superblock": false, 00:18:35.398 "num_base_bdevs": 4, 00:18:35.398 "num_base_bdevs_discovered": 1, 00:18:35.398 "num_base_bdevs_operational": 4, 00:18:35.398 "base_bdevs_list": [ 00:18:35.398 { 00:18:35.398 "name": "BaseBdev1", 00:18:35.398 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:35.398 "is_configured": true, 00:18:35.398 "data_offset": 0, 00:18:35.398 "data_size": 65536 00:18:35.398 }, 00:18:35.398 { 00:18:35.398 "name": "BaseBdev2", 00:18:35.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.398 "is_configured": false, 00:18:35.398 "data_offset": 0, 00:18:35.398 "data_size": 0 00:18:35.398 }, 00:18:35.398 { 00:18:35.398 "name": "BaseBdev3", 00:18:35.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.398 "is_configured": false, 00:18:35.398 "data_offset": 0, 00:18:35.398 "data_size": 0 00:18:35.398 }, 00:18:35.398 { 00:18:35.398 "name": "BaseBdev4", 00:18:35.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.398 "is_configured": false, 00:18:35.398 "data_offset": 0, 00:18:35.398 "data_size": 0 00:18:35.398 } 00:18:35.398 ] 00:18:35.398 }' 00:18:35.398 05:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.398 05:37:39 -- common/autotest_common.sh@10 -- # set +x 00:18:35.964 05:37:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.221 [2024-10-07 05:37:40.123145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.221 BaseBdev2 00:18:36.221 05:37:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:36.221 05:37:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:36.221 05:37:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:36.221 05:37:40 -- common/autotest_common.sh@889 -- # local i 00:18:36.222 05:37:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:36.222 05:37:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:36.222 05:37:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.479 05:37:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.736 [ 00:18:36.736 { 00:18:36.736 "name": "BaseBdev2", 00:18:36.736 "aliases": [ 00:18:36.736 "64569843-e046-4895-96cf-52d645158b02" 00:18:36.736 ], 00:18:36.736 "product_name": "Malloc disk", 00:18:36.736 "block_size": 512, 00:18:36.736 "num_blocks": 65536, 00:18:36.736 "uuid": "64569843-e046-4895-96cf-52d645158b02", 00:18:36.736 "assigned_rate_limits": { 00:18:36.736 "rw_ios_per_sec": 0, 00:18:36.736 "rw_mbytes_per_sec": 0, 00:18:36.736 "r_mbytes_per_sec": 0, 00:18:36.736 "w_mbytes_per_sec": 0 00:18:36.736 }, 00:18:36.736 "claimed": true, 00:18:36.736 "claim_type": "exclusive_write", 00:18:36.736 "zoned": false, 00:18:36.736 "supported_io_types": { 00:18:36.736 "read": true, 00:18:36.736 "write": true, 00:18:36.736 "unmap": true, 00:18:36.736 "write_zeroes": true, 00:18:36.736 "flush": true, 00:18:36.736 "reset": true, 00:18:36.736 "compare": false, 00:18:36.736 "compare_and_write": false, 00:18:36.736 "abort": true, 00:18:36.736 "nvme_admin": false, 00:18:36.736 "nvme_io": false 00:18:36.736 }, 00:18:36.736 "memory_domains": [ 
00:18:36.736 { 00:18:36.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.736 "dma_device_type": 2 00:18:36.736 } 00:18:36.736 ], 00:18:36.736 "driver_specific": {} 00:18:36.736 } 00:18:36.736 ] 00:18:36.736 05:37:40 -- common/autotest_common.sh@895 -- # return 0 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.736 05:37:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.737 05:37:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.737 05:37:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.994 05:37:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.994 "name": "Existed_Raid", 00:18:36.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.994 "strip_size_kb": 64, 00:18:36.994 "state": "configuring", 00:18:36.994 "raid_level": "concat", 00:18:36.994 "superblock": false, 00:18:36.994 "num_base_bdevs": 4, 00:18:36.994 "num_base_bdevs_discovered": 2, 00:18:36.994 "num_base_bdevs_operational": 4, 00:18:36.994 "base_bdevs_list": [ 00:18:36.994 { 00:18:36.994 "name": "BaseBdev1", 00:18:36.994 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:36.994 "is_configured": true, 00:18:36.994 "data_offset": 0, 00:18:36.994 "data_size": 65536 00:18:36.994 }, 00:18:36.994 { 00:18:36.994 "name": "BaseBdev2", 00:18:36.994 "uuid": "64569843-e046-4895-96cf-52d645158b02", 00:18:36.994 "is_configured": true, 00:18:36.994 "data_offset": 0, 00:18:36.994 "data_size": 65536 00:18:36.994 }, 00:18:36.994 { 00:18:36.994 "name": "BaseBdev3", 00:18:36.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.994 "is_configured": false, 00:18:36.994 "data_offset": 0, 00:18:36.994 "data_size": 0 00:18:36.994 }, 00:18:36.994 { 00:18:36.994 "name": "BaseBdev4", 00:18:36.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.994 "is_configured": false, 00:18:36.994 "data_offset": 0, 00:18:36.994 "data_size": 0 00:18:36.994 } 00:18:36.994 ] 00:18:36.994 }' 00:18:36.994 05:37:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.994 05:37:40 -- common/autotest_common.sh@10 -- # set +x 00:18:37.929 05:37:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:37.929 [2024-10-07 05:37:41.835887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.929 BaseBdev3 00:18:37.929 05:37:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:37.929 05:37:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:37.929 05:37:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:37.929 
05:37:41 -- common/autotest_common.sh@889 -- # local i 00:18:37.929 05:37:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:37.929 05:37:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:37.929 05:37:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:38.188 05:37:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:38.446 [ 00:18:38.446 { 00:18:38.446 "name": "BaseBdev3", 00:18:38.446 "aliases": [ 00:18:38.446 "11d8e7dd-25d6-48fc-86a1-815459d71b21" 00:18:38.446 ], 00:18:38.447 "product_name": "Malloc disk", 00:18:38.447 "block_size": 512, 00:18:38.447 "num_blocks": 65536, 00:18:38.447 "uuid": "11d8e7dd-25d6-48fc-86a1-815459d71b21", 00:18:38.447 "assigned_rate_limits": { 00:18:38.447 "rw_ios_per_sec": 0, 00:18:38.447 "rw_mbytes_per_sec": 0, 00:18:38.447 "r_mbytes_per_sec": 0, 00:18:38.447 "w_mbytes_per_sec": 0 00:18:38.447 }, 00:18:38.447 "claimed": true, 00:18:38.447 "claim_type": "exclusive_write", 00:18:38.447 "zoned": false, 00:18:38.447 "supported_io_types": { 00:18:38.447 "read": true, 00:18:38.447 "write": true, 00:18:38.447 "unmap": true, 00:18:38.447 "write_zeroes": true, 00:18:38.447 "flush": true, 00:18:38.447 "reset": true, 00:18:38.447 "compare": false, 00:18:38.447 "compare_and_write": false, 00:18:38.447 "abort": true, 00:18:38.447 "nvme_admin": false, 00:18:38.447 "nvme_io": false 00:18:38.447 }, 00:18:38.447 "memory_domains": [ 00:18:38.447 { 00:18:38.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.447 "dma_device_type": 2 00:18:38.447 } 00:18:38.447 ], 00:18:38.447 "driver_specific": {} 00:18:38.447 } 00:18:38.447 ] 00:18:38.447 05:37:42 -- common/autotest_common.sh@895 -- # return 0 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.447 05:37:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.705 05:37:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:38.705 "name": "Existed_Raid", 00:18:38.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.705 "strip_size_kb": 64, 00:18:38.705 "state": "configuring", 00:18:38.705 "raid_level": "concat", 00:18:38.706 "superblock": false, 00:18:38.706 "num_base_bdevs": 4, 00:18:38.706 "num_base_bdevs_discovered": 3, 00:18:38.706 "num_base_bdevs_operational": 4, 00:18:38.706 "base_bdevs_list": [ 00:18:38.706 { 00:18:38.706 "name": 
"BaseBdev1", 00:18:38.706 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:38.706 "is_configured": true, 00:18:38.706 "data_offset": 0, 00:18:38.706 "data_size": 65536 00:18:38.706 }, 00:18:38.706 { 00:18:38.706 "name": "BaseBdev2", 00:18:38.706 "uuid": "64569843-e046-4895-96cf-52d645158b02", 00:18:38.706 "is_configured": true, 00:18:38.706 "data_offset": 0, 00:18:38.706 "data_size": 65536 00:18:38.706 }, 00:18:38.706 { 00:18:38.706 "name": "BaseBdev3", 00:18:38.706 "uuid": "11d8e7dd-25d6-48fc-86a1-815459d71b21", 00:18:38.706 "is_configured": true, 00:18:38.706 "data_offset": 0, 00:18:38.706 "data_size": 65536 00:18:38.706 }, 00:18:38.706 { 00:18:38.706 "name": "BaseBdev4", 00:18:38.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.706 "is_configured": false, 00:18:38.706 "data_offset": 0, 00:18:38.706 "data_size": 0 00:18:38.706 } 00:18:38.706 ] 00:18:38.706 }' 00:18:38.706 05:37:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:38.706 05:37:42 -- common/autotest_common.sh@10 -- # set +x 00:18:39.639 05:37:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:39.639 [2024-10-07 05:37:43.544461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:39.639 [2024-10-07 05:37:43.544511] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:39.639 [2024-10-07 05:37:43.544522] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:39.639 [2024-10-07 05:37:43.544684] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:39.639 [2024-10-07 05:37:43.545032] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:39.639 [2024-10-07 05:37:43.545057] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:39.639 [2024-10-07 05:37:43.545309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.639 BaseBdev4 00:18:39.639 05:37:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:39.639 05:37:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:39.639 05:37:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:39.639 05:37:43 -- common/autotest_common.sh@889 -- # local i 00:18:39.639 05:37:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:39.639 05:37:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:39.639 05:37:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.897 05:37:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:40.156 [ 00:18:40.156 { 00:18:40.156 "name": "BaseBdev4", 00:18:40.156 "aliases": [ 00:18:40.156 "08544175-c278-4d8a-afb1-badd88bb9924" 00:18:40.156 ], 00:18:40.156 "product_name": "Malloc disk", 00:18:40.156 "block_size": 512, 00:18:40.156 "num_blocks": 65536, 00:18:40.156 "uuid": "08544175-c278-4d8a-afb1-badd88bb9924", 00:18:40.156 "assigned_rate_limits": { 00:18:40.156 "rw_ios_per_sec": 0, 00:18:40.156 "rw_mbytes_per_sec": 0, 00:18:40.156 "r_mbytes_per_sec": 0, 00:18:40.156 "w_mbytes_per_sec": 0 00:18:40.156 }, 00:18:40.156 "claimed": true, 00:18:40.156 "claim_type": "exclusive_write", 00:18:40.156 "zoned": false, 00:18:40.156 
"supported_io_types": { 00:18:40.156 "read": true, 00:18:40.156 "write": true, 00:18:40.156 "unmap": true, 00:18:40.156 "write_zeroes": true, 00:18:40.156 "flush": true, 00:18:40.156 "reset": true, 00:18:40.156 "compare": false, 00:18:40.156 "compare_and_write": false, 00:18:40.156 "abort": true, 00:18:40.156 "nvme_admin": false, 00:18:40.156 "nvme_io": false 00:18:40.156 }, 00:18:40.156 "memory_domains": [ 00:18:40.156 { 00:18:40.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.156 "dma_device_type": 2 00:18:40.156 } 00:18:40.156 ], 00:18:40.156 "driver_specific": {} 00:18:40.156 } 00:18:40.156 ] 00:18:40.156 05:37:44 -- common/autotest_common.sh@895 -- # return 0 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.156 05:37:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.157 05:37:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.415 05:37:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.415 "name": "Existed_Raid", 00:18:40.415 "uuid": "bf604c18-8e17-45cb-8ee6-c2633fefc5ed", 00:18:40.415 "strip_size_kb": 64, 00:18:40.415 "state": "online", 00:18:40.415 "raid_level": "concat", 00:18:40.415 "superblock": false, 00:18:40.415 "num_base_bdevs": 4, 00:18:40.415 "num_base_bdevs_discovered": 4, 00:18:40.415 "num_base_bdevs_operational": 4, 00:18:40.415 "base_bdevs_list": [ 00:18:40.415 { 00:18:40.415 "name": "BaseBdev1", 00:18:40.415 "uuid": "2a52da13-061e-4304-85a3-30dff25d7333", 00:18:40.415 "is_configured": true, 00:18:40.415 "data_offset": 0, 00:18:40.415 "data_size": 65536 00:18:40.415 }, 00:18:40.415 { 00:18:40.415 "name": "BaseBdev2", 00:18:40.415 "uuid": "64569843-e046-4895-96cf-52d645158b02", 00:18:40.415 "is_configured": true, 00:18:40.415 "data_offset": 0, 00:18:40.415 "data_size": 65536 00:18:40.415 }, 00:18:40.415 { 00:18:40.415 "name": "BaseBdev3", 00:18:40.415 "uuid": "11d8e7dd-25d6-48fc-86a1-815459d71b21", 00:18:40.415 "is_configured": true, 00:18:40.415 "data_offset": 0, 00:18:40.415 "data_size": 65536 00:18:40.415 }, 00:18:40.415 { 00:18:40.415 "name": "BaseBdev4", 00:18:40.415 "uuid": "08544175-c278-4d8a-afb1-badd88bb9924", 00:18:40.415 "is_configured": true, 00:18:40.415 "data_offset": 0, 00:18:40.415 "data_size": 65536 00:18:40.415 } 00:18:40.415 ] 00:18:40.415 }' 00:18:40.415 05:37:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.415 05:37:44 -- common/autotest_common.sh@10 -- # set +x 00:18:41.033 05:37:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:41.303 [2024-10-07 05:37:45.124877] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:41.303 [2024-10-07 05:37:45.124914] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.303 [2024-10-07 05:37:45.124994] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.303 05:37:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.562 05:37:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.562 "name": "Existed_Raid", 00:18:41.562 "uuid": "bf604c18-8e17-45cb-8ee6-c2633fefc5ed", 00:18:41.562 "strip_size_kb": 64, 00:18:41.562 "state": "offline", 00:18:41.562 "raid_level": "concat", 00:18:41.562 "superblock": false, 00:18:41.562 "num_base_bdevs": 4, 00:18:41.562 "num_base_bdevs_discovered": 3, 00:18:41.562 "num_base_bdevs_operational": 3, 00:18:41.562 "base_bdevs_list": [ 00:18:41.562 { 00:18:41.562 "name": null, 00:18:41.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.562 "is_configured": false, 00:18:41.562 "data_offset": 0, 00:18:41.562 "data_size": 65536 00:18:41.562 }, 00:18:41.562 { 00:18:41.562 "name": "BaseBdev2", 00:18:41.562 "uuid": "64569843-e046-4895-96cf-52d645158b02", 00:18:41.562 "is_configured": true, 00:18:41.562 "data_offset": 0, 00:18:41.562 "data_size": 65536 00:18:41.562 }, 00:18:41.562 { 00:18:41.562 "name": "BaseBdev3", 00:18:41.562 "uuid": "11d8e7dd-25d6-48fc-86a1-815459d71b21", 00:18:41.562 "is_configured": true, 00:18:41.562 "data_offset": 0, 00:18:41.562 "data_size": 65536 00:18:41.562 }, 00:18:41.562 { 00:18:41.562 "name": "BaseBdev4", 00:18:41.562 "uuid": "08544175-c278-4d8a-afb1-badd88bb9924", 00:18:41.562 "is_configured": true, 00:18:41.562 "data_offset": 0, 00:18:41.562 "data_size": 65536 00:18:41.562 } 00:18:41.562 ] 00:18:41.562 }' 00:18:41.562 05:37:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.562 05:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:42.497 05:37:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:42.755 [2024-10-07 05:37:46.540131] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:42.755 05:37:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:42.755 05:37:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:42.755 05:37:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.755 05:37:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:43.013 05:37:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:43.013 05:37:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.013 05:37:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:43.271 [2024-10-07 05:37:47.014857] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:43.271 05:37:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.271 05:37:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.271 05:37:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.271 05:37:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:43.530 05:37:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:43.530 05:37:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.530 05:37:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:43.788 [2024-10-07 05:37:47.542189] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:43.788 [2024-10-07 05:37:47.542245] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:43.788 05:37:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:43.788 05:37:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:43.788 05:37:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.788 05:37:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:44.047 05:37:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:44.047 05:37:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:44.047 05:37:47 -- bdev/bdev_raid.sh@287 -- # killprocess 152146 00:18:44.047 05:37:47 -- common/autotest_common.sh@926 -- # '[' -z 152146 ']' 00:18:44.047 05:37:47 -- common/autotest_common.sh@930 -- # kill -0 152146 00:18:44.047 05:37:47 -- common/autotest_common.sh@931 -- # uname 00:18:44.047 05:37:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.047 05:37:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 152146 00:18:44.047 killing process with pid 152146 00:18:44.047 05:37:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:44.047 05:37:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:44.047 05:37:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 152146' 00:18:44.047 05:37:47 -- 
common/autotest_common.sh@945 -- # kill 152146 00:18:44.047 [2024-10-07 05:37:47.855238] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.047 05:37:47 -- common/autotest_common.sh@950 -- # wait 152146 00:18:44.047 [2024-10-07 05:37:47.855337] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.981 ************************************ 00:18:44.981 END TEST raid_state_function_test 00:18:44.981 ************************************ 00:18:44.981 05:37:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:44.981 00:18:44.981 real 0m14.344s 00:18:44.981 user 0m25.497s 00:18:44.981 sys 0m1.907s 00:18:44.981 05:37:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.981 05:37:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.981 05:37:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:44.981 05:37:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:44.982 05:37:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.982 05:37:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.982 ************************************ 00:18:44.982 START TEST raid_state_function_test_sb 00:18:44.982 ************************************ 00:18:44.982 05:37:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@226 
-- # raid_pid=153068 00:18:44.982 Process raid pid: 153068 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 153068' 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 153068 /var/tmp/spdk-raid.sock 00:18:44.982 05:37:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:44.982 05:37:48 -- common/autotest_common.sh@819 -- # '[' -z 153068 ']' 00:18:44.982 05:37:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:44.982 05:37:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:44.982 05:37:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:44.982 05:37:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.982 05:37:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.982 [2024-10-07 05:37:48.900028] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:44.982 [2024-10-07 05:37:48.900884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.241 [2024-10-07 05:37:49.068257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.500 [2024-10-07 05:37:49.227021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.500 [2024-10-07 05:37:49.394286] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.067 05:37:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.067 05:37:49 -- common/autotest_common.sh@852 -- # return 0 00:18:46.067 05:37:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:46.067 [2024-10-07 05:37:50.013760] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.067 [2024-10-07 05:37:50.014238] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.067 [2024-10-07 05:37:50.014269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.067 [2024-10-07 05:37:50.014407] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.067 [2024-10-07 05:37:50.014435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:46.067 [2024-10-07 05:37:50.014622] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.067 [2024-10-07 05:37:50.014651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:46.067 [2024-10-07 05:37:50.014792] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 
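The superblock variant that starts here differs from the previous run mainly in the -s flag handed to bdev_raid_create: the raid is registered before any of its base bdevs exist, sits in "configuring" while they are missing, and is torn down and recreated between steps. Condensed to the rpc.py calls visible in this trace (names, strip size and raid level are simply the values this run uses; treat this as a sketch, not the test's exact control flow):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# -z 64: 64 KiB strip size; -s: keep an on-disk superblock; -r concat: raid level.
$RPC bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# With no base bdevs present yet, the entry reports "state": "configuring".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# The test also deletes and recreates the raid as it adds members.
$RPC bdev_raid_delete Existed_Raid
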
00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.067 05:37:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.326 05:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.326 "name": "Existed_Raid", 00:18:46.326 "uuid": "a4f1181a-b4c7-4169-80cf-ada529c4d5a9", 00:18:46.326 "strip_size_kb": 64, 00:18:46.326 "state": "configuring", 00:18:46.326 "raid_level": "concat", 00:18:46.326 "superblock": true, 00:18:46.326 "num_base_bdevs": 4, 00:18:46.326 "num_base_bdevs_discovered": 0, 00:18:46.326 "num_base_bdevs_operational": 4, 00:18:46.326 "base_bdevs_list": [ 00:18:46.326 { 00:18:46.326 "name": "BaseBdev1", 00:18:46.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.326 "is_configured": false, 00:18:46.326 "data_offset": 0, 00:18:46.326 "data_size": 0 00:18:46.326 }, 00:18:46.326 { 00:18:46.326 "name": "BaseBdev2", 00:18:46.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.326 "is_configured": false, 00:18:46.326 "data_offset": 0, 00:18:46.326 "data_size": 0 00:18:46.326 }, 00:18:46.326 { 00:18:46.326 "name": "BaseBdev3", 00:18:46.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.326 "is_configured": false, 00:18:46.326 "data_offset": 0, 00:18:46.326 "data_size": 0 00:18:46.326 }, 00:18:46.326 { 00:18:46.326 "name": "BaseBdev4", 00:18:46.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.326 "is_configured": false, 00:18:46.326 "data_offset": 0, 00:18:46.326 "data_size": 0 00:18:46.326 } 00:18:46.326 ] 00:18:46.326 }' 00:18:46.326 05:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.326 05:37:50 -- common/autotest_common.sh@10 -- # set +x 00:18:47.261 05:37:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:47.261 [2024-10-07 05:37:51.177755] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.261 [2024-10-07 05:37:51.177800] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:47.261 05:37:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.520 [2024-10-07 05:37:51.369830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.520 [2024-10-07 05:37:51.369893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.520 [2024-10-07 05:37:51.369907] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.520 [2024-10-07 05:37:51.369935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.520 [2024-10-07 05:37:51.369945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.520 [2024-10-07 
05:37:51.369982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.520 [2024-10-07 05:37:51.369991] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.520 [2024-10-07 05:37:51.370017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.520 05:37:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.778 [2024-10-07 05:37:51.587788] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.778 BaseBdev1 00:18:47.778 05:37:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:47.778 05:37:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:47.778 05:37:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.778 05:37:51 -- common/autotest_common.sh@889 -- # local i 00:18:47.778 05:37:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.778 05:37:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.778 05:37:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.038 05:37:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:48.297 [ 00:18:48.297 { 00:18:48.297 "name": "BaseBdev1", 00:18:48.297 "aliases": [ 00:18:48.297 "1f7525fc-83c5-4177-8c2a-7ac59f82bde5" 00:18:48.297 ], 00:18:48.297 "product_name": "Malloc disk", 00:18:48.297 "block_size": 512, 00:18:48.297 "num_blocks": 65536, 00:18:48.297 "uuid": "1f7525fc-83c5-4177-8c2a-7ac59f82bde5", 00:18:48.297 "assigned_rate_limits": { 00:18:48.297 "rw_ios_per_sec": 0, 00:18:48.297 "rw_mbytes_per_sec": 0, 00:18:48.297 "r_mbytes_per_sec": 0, 00:18:48.297 "w_mbytes_per_sec": 0 00:18:48.297 }, 00:18:48.297 "claimed": true, 00:18:48.297 "claim_type": "exclusive_write", 00:18:48.297 "zoned": false, 00:18:48.297 "supported_io_types": { 00:18:48.297 "read": true, 00:18:48.297 "write": true, 00:18:48.297 "unmap": true, 00:18:48.297 "write_zeroes": true, 00:18:48.297 "flush": true, 00:18:48.297 "reset": true, 00:18:48.297 "compare": false, 00:18:48.297 "compare_and_write": false, 00:18:48.297 "abort": true, 00:18:48.297 "nvme_admin": false, 00:18:48.297 "nvme_io": false 00:18:48.297 }, 00:18:48.297 "memory_domains": [ 00:18:48.297 { 00:18:48.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.297 "dma_device_type": 2 00:18:48.297 } 00:18:48.297 ], 00:18:48.297 "driver_specific": {} 00:18:48.297 } 00:18:48.297 ] 00:18:48.297 05:37:52 -- common/autotest_common.sh@895 -- # return 0 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.297 05:37:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.298 05:37:52 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.298 05:37:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.298 05:37:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.556 05:37:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.557 "name": "Existed_Raid", 00:18:48.557 "uuid": "ab5751b9-9b97-42ec-861f-3aee2f3ee470", 00:18:48.557 "strip_size_kb": 64, 00:18:48.557 "state": "configuring", 00:18:48.557 "raid_level": "concat", 00:18:48.557 "superblock": true, 00:18:48.557 "num_base_bdevs": 4, 00:18:48.557 "num_base_bdevs_discovered": 1, 00:18:48.557 "num_base_bdevs_operational": 4, 00:18:48.557 "base_bdevs_list": [ 00:18:48.557 { 00:18:48.557 "name": "BaseBdev1", 00:18:48.557 "uuid": "1f7525fc-83c5-4177-8c2a-7ac59f82bde5", 00:18:48.557 "is_configured": true, 00:18:48.557 "data_offset": 2048, 00:18:48.557 "data_size": 63488 00:18:48.557 }, 00:18:48.557 { 00:18:48.557 "name": "BaseBdev2", 00:18:48.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.557 "is_configured": false, 00:18:48.557 "data_offset": 0, 00:18:48.557 "data_size": 0 00:18:48.557 }, 00:18:48.557 { 00:18:48.557 "name": "BaseBdev3", 00:18:48.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.557 "is_configured": false, 00:18:48.557 "data_offset": 0, 00:18:48.557 "data_size": 0 00:18:48.557 }, 00:18:48.557 { 00:18:48.557 "name": "BaseBdev4", 00:18:48.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.557 "is_configured": false, 00:18:48.557 "data_offset": 0, 00:18:48.557 "data_size": 0 00:18:48.557 } 00:18:48.557 ] 00:18:48.557 }' 00:18:48.557 05:37:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.557 05:37:52 -- common/autotest_common.sh@10 -- # set +x 00:18:49.124 05:37:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:49.124 [2024-10-07 05:37:53.024004] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.124 [2024-10-07 05:37:53.024046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:49.124 05:37:53 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:49.124 05:37:53 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.382 05:37:53 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:49.641 BaseBdev1 00:18:49.641 05:37:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:49.641 05:37:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:49.641 05:37:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:49.641 05:37:53 -- common/autotest_common.sh@889 -- # local i 00:18:49.641 05:37:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:49.641 05:37:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:49.641 05:37:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.899 05:37:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:50.158 [ 00:18:50.158 { 00:18:50.158 "name": "BaseBdev1", 00:18:50.158 "aliases": [ 00:18:50.158 
"e98b054a-55e7-4d18-89f0-7ae8451fa875" 00:18:50.158 ], 00:18:50.158 "product_name": "Malloc disk", 00:18:50.158 "block_size": 512, 00:18:50.158 "num_blocks": 65536, 00:18:50.158 "uuid": "e98b054a-55e7-4d18-89f0-7ae8451fa875", 00:18:50.158 "assigned_rate_limits": { 00:18:50.158 "rw_ios_per_sec": 0, 00:18:50.158 "rw_mbytes_per_sec": 0, 00:18:50.158 "r_mbytes_per_sec": 0, 00:18:50.158 "w_mbytes_per_sec": 0 00:18:50.158 }, 00:18:50.158 "claimed": false, 00:18:50.158 "zoned": false, 00:18:50.158 "supported_io_types": { 00:18:50.158 "read": true, 00:18:50.158 "write": true, 00:18:50.158 "unmap": true, 00:18:50.158 "write_zeroes": true, 00:18:50.158 "flush": true, 00:18:50.158 "reset": true, 00:18:50.158 "compare": false, 00:18:50.159 "compare_and_write": false, 00:18:50.159 "abort": true, 00:18:50.159 "nvme_admin": false, 00:18:50.159 "nvme_io": false 00:18:50.159 }, 00:18:50.159 "memory_domains": [ 00:18:50.159 { 00:18:50.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.159 "dma_device_type": 2 00:18:50.159 } 00:18:50.159 ], 00:18:50.159 "driver_specific": {} 00:18:50.159 } 00:18:50.159 ] 00:18:50.159 05:37:53 -- common/autotest_common.sh@895 -- # return 0 00:18:50.159 05:37:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:50.418 [2024-10-07 05:37:54.194786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.418 [2024-10-07 05:37:54.196560] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.418 [2024-10-07 05:37:54.196633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.418 [2024-10-07 05:37:54.196648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.418 [2024-10-07 05:37:54.196677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.418 [2024-10-07 05:37:54.196688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.418 [2024-10-07 05:37:54.196706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.418 05:37:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.684 05:37:54 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:50.684 "name": "Existed_Raid", 00:18:50.684 "uuid": "dab54eb6-a52c-4dc5-b2f5-c3a704300b20", 00:18:50.684 "strip_size_kb": 64, 00:18:50.684 "state": "configuring", 00:18:50.684 "raid_level": "concat", 00:18:50.684 "superblock": true, 00:18:50.684 "num_base_bdevs": 4, 00:18:50.684 "num_base_bdevs_discovered": 1, 00:18:50.684 "num_base_bdevs_operational": 4, 00:18:50.684 "base_bdevs_list": [ 00:18:50.684 { 00:18:50.684 "name": "BaseBdev1", 00:18:50.684 "uuid": "e98b054a-55e7-4d18-89f0-7ae8451fa875", 00:18:50.684 "is_configured": true, 00:18:50.684 "data_offset": 2048, 00:18:50.684 "data_size": 63488 00:18:50.684 }, 00:18:50.684 { 00:18:50.684 "name": "BaseBdev2", 00:18:50.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.684 "is_configured": false, 00:18:50.685 "data_offset": 0, 00:18:50.685 "data_size": 0 00:18:50.685 }, 00:18:50.685 { 00:18:50.685 "name": "BaseBdev3", 00:18:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.685 "is_configured": false, 00:18:50.685 "data_offset": 0, 00:18:50.685 "data_size": 0 00:18:50.685 }, 00:18:50.685 { 00:18:50.685 "name": "BaseBdev4", 00:18:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.685 "is_configured": false, 00:18:50.685 "data_offset": 0, 00:18:50.685 "data_size": 0 00:18:50.685 } 00:18:50.685 ] 00:18:50.685 }' 00:18:50.685 05:37:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.685 05:37:54 -- common/autotest_common.sh@10 -- # set +x 00:18:51.256 05:37:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.515 [2024-10-07 05:37:55.330683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.515 BaseBdev2 00:18:51.515 05:37:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:51.515 05:37:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:51.515 05:37:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:51.515 05:37:55 -- common/autotest_common.sh@889 -- # local i 00:18:51.515 05:37:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:51.515 05:37:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:51.515 05:37:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.774 05:37:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.774 [ 00:18:51.774 { 00:18:51.774 "name": "BaseBdev2", 00:18:51.774 "aliases": [ 00:18:51.774 "8e635be1-0e8d-4072-8720-6413cf93cb4a" 00:18:51.774 ], 00:18:51.774 "product_name": "Malloc disk", 00:18:51.774 "block_size": 512, 00:18:51.774 "num_blocks": 65536, 00:18:51.774 "uuid": "8e635be1-0e8d-4072-8720-6413cf93cb4a", 00:18:51.774 "assigned_rate_limits": { 00:18:51.774 "rw_ios_per_sec": 0, 00:18:51.774 "rw_mbytes_per_sec": 0, 00:18:51.774 "r_mbytes_per_sec": 0, 00:18:51.774 "w_mbytes_per_sec": 0 00:18:51.774 }, 00:18:51.774 "claimed": true, 00:18:51.774 "claim_type": "exclusive_write", 00:18:51.774 "zoned": false, 00:18:51.774 "supported_io_types": { 00:18:51.774 "read": true, 00:18:51.774 "write": true, 00:18:51.774 "unmap": true, 00:18:51.774 "write_zeroes": true, 00:18:51.774 "flush": true, 00:18:51.774 "reset": true, 00:18:51.774 "compare": false, 00:18:51.774 "compare_and_write": false, 00:18:51.774 "abort": true, 00:18:51.774 "nvme_admin": false, 00:18:51.774 
"nvme_io": false 00:18:51.774 }, 00:18:51.774 "memory_domains": [ 00:18:51.774 { 00:18:51.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.774 "dma_device_type": 2 00:18:51.774 } 00:18:51.774 ], 00:18:51.774 "driver_specific": {} 00:18:51.774 } 00:18:51.774 ] 00:18:51.774 05:37:55 -- common/autotest_common.sh@895 -- # return 0 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.774 05:37:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.034 05:37:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.034 "name": "Existed_Raid", 00:18:52.034 "uuid": "dab54eb6-a52c-4dc5-b2f5-c3a704300b20", 00:18:52.034 "strip_size_kb": 64, 00:18:52.034 "state": "configuring", 00:18:52.034 "raid_level": "concat", 00:18:52.034 "superblock": true, 00:18:52.034 "num_base_bdevs": 4, 00:18:52.034 "num_base_bdevs_discovered": 2, 00:18:52.034 "num_base_bdevs_operational": 4, 00:18:52.034 "base_bdevs_list": [ 00:18:52.034 { 00:18:52.034 "name": "BaseBdev1", 00:18:52.034 "uuid": "e98b054a-55e7-4d18-89f0-7ae8451fa875", 00:18:52.034 "is_configured": true, 00:18:52.034 "data_offset": 2048, 00:18:52.034 "data_size": 63488 00:18:52.034 }, 00:18:52.034 { 00:18:52.034 "name": "BaseBdev2", 00:18:52.034 "uuid": "8e635be1-0e8d-4072-8720-6413cf93cb4a", 00:18:52.034 "is_configured": true, 00:18:52.034 "data_offset": 2048, 00:18:52.034 "data_size": 63488 00:18:52.034 }, 00:18:52.034 { 00:18:52.034 "name": "BaseBdev3", 00:18:52.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.034 "is_configured": false, 00:18:52.034 "data_offset": 0, 00:18:52.034 "data_size": 0 00:18:52.034 }, 00:18:52.034 { 00:18:52.034 "name": "BaseBdev4", 00:18:52.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.034 "is_configured": false, 00:18:52.034 "data_offset": 0, 00:18:52.034 "data_size": 0 00:18:52.034 } 00:18:52.034 ] 00:18:52.034 }' 00:18:52.034 05:37:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.034 05:37:55 -- common/autotest_common.sh@10 -- # set +x 00:18:52.971 05:37:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.971 [2024-10-07 05:37:56.891368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.971 BaseBdev3 00:18:52.971 05:37:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:52.971 05:37:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:52.971 05:37:56 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:52.971 05:37:56 -- common/autotest_common.sh@889 -- # local i 00:18:52.971 05:37:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:52.971 05:37:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:52.971 05:37:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.230 05:37:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:53.489 [ 00:18:53.489 { 00:18:53.489 "name": "BaseBdev3", 00:18:53.489 "aliases": [ 00:18:53.489 "bd048bd2-c85a-4692-a207-414735605286" 00:18:53.489 ], 00:18:53.489 "product_name": "Malloc disk", 00:18:53.489 "block_size": 512, 00:18:53.489 "num_blocks": 65536, 00:18:53.489 "uuid": "bd048bd2-c85a-4692-a207-414735605286", 00:18:53.489 "assigned_rate_limits": { 00:18:53.489 "rw_ios_per_sec": 0, 00:18:53.489 "rw_mbytes_per_sec": 0, 00:18:53.489 "r_mbytes_per_sec": 0, 00:18:53.489 "w_mbytes_per_sec": 0 00:18:53.489 }, 00:18:53.489 "claimed": true, 00:18:53.489 "claim_type": "exclusive_write", 00:18:53.489 "zoned": false, 00:18:53.489 "supported_io_types": { 00:18:53.489 "read": true, 00:18:53.489 "write": true, 00:18:53.489 "unmap": true, 00:18:53.489 "write_zeroes": true, 00:18:53.489 "flush": true, 00:18:53.489 "reset": true, 00:18:53.489 "compare": false, 00:18:53.489 "compare_and_write": false, 00:18:53.489 "abort": true, 00:18:53.489 "nvme_admin": false, 00:18:53.489 "nvme_io": false 00:18:53.489 }, 00:18:53.489 "memory_domains": [ 00:18:53.489 { 00:18:53.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.489 "dma_device_type": 2 00:18:53.489 } 00:18:53.489 ], 00:18:53.489 "driver_specific": {} 00:18:53.489 } 00:18:53.489 ] 00:18:53.489 05:37:57 -- common/autotest_common.sh@895 -- # return 0 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.489 05:37:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.748 05:37:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.748 "name": "Existed_Raid", 00:18:53.748 "uuid": "dab54eb6-a52c-4dc5-b2f5-c3a704300b20", 00:18:53.748 "strip_size_kb": 64, 00:18:53.748 "state": "configuring", 00:18:53.748 "raid_level": "concat", 00:18:53.748 "superblock": true, 00:18:53.748 "num_base_bdevs": 4, 00:18:53.748 "num_base_bdevs_discovered": 3, 00:18:53.748 "num_base_bdevs_operational": 4, 
00:18:53.748 "base_bdevs_list": [ 00:18:53.748 { 00:18:53.748 "name": "BaseBdev1", 00:18:53.748 "uuid": "e98b054a-55e7-4d18-89f0-7ae8451fa875", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev2", 00:18:53.748 "uuid": "8e635be1-0e8d-4072-8720-6413cf93cb4a", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev3", 00:18:53.748 "uuid": "bd048bd2-c85a-4692-a207-414735605286", 00:18:53.748 "is_configured": true, 00:18:53.748 "data_offset": 2048, 00:18:53.748 "data_size": 63488 00:18:53.748 }, 00:18:53.748 { 00:18:53.748 "name": "BaseBdev4", 00:18:53.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.748 "is_configured": false, 00:18:53.748 "data_offset": 0, 00:18:53.748 "data_size": 0 00:18:53.748 } 00:18:53.748 ] 00:18:53.748 }' 00:18:53.748 05:37:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.748 05:37:57 -- common/autotest_common.sh@10 -- # set +x 00:18:54.681 05:37:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:54.681 [2024-10-07 05:37:58.607642] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.681 [2024-10-07 05:37:58.607881] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:54.681 [2024-10-07 05:37:58.607895] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:54.681 [2024-10-07 05:37:58.608085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:54.681 [2024-10-07 05:37:58.608593] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:54.681 [2024-10-07 05:37:58.608617] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:54.681 [2024-10-07 05:37:58.608794] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.681 BaseBdev4 00:18:54.681 05:37:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:54.681 05:37:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:54.681 05:37:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:54.681 05:37:58 -- common/autotest_common.sh@889 -- # local i 00:18:54.681 05:37:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:54.681 05:37:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:54.681 05:37:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.939 05:37:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:55.197 [ 00:18:55.197 { 00:18:55.197 "name": "BaseBdev4", 00:18:55.197 "aliases": [ 00:18:55.197 "268fcee7-098d-45bd-8cb1-c7771565fdc6" 00:18:55.197 ], 00:18:55.197 "product_name": "Malloc disk", 00:18:55.197 "block_size": 512, 00:18:55.197 "num_blocks": 65536, 00:18:55.197 "uuid": "268fcee7-098d-45bd-8cb1-c7771565fdc6", 00:18:55.197 "assigned_rate_limits": { 00:18:55.197 "rw_ios_per_sec": 0, 00:18:55.197 "rw_mbytes_per_sec": 0, 00:18:55.197 "r_mbytes_per_sec": 0, 00:18:55.197 "w_mbytes_per_sec": 0 00:18:55.197 }, 00:18:55.197 "claimed": true, 00:18:55.197 "claim_type": 
"exclusive_write", 00:18:55.197 "zoned": false, 00:18:55.197 "supported_io_types": { 00:18:55.197 "read": true, 00:18:55.197 "write": true, 00:18:55.197 "unmap": true, 00:18:55.197 "write_zeroes": true, 00:18:55.197 "flush": true, 00:18:55.197 "reset": true, 00:18:55.197 "compare": false, 00:18:55.197 "compare_and_write": false, 00:18:55.197 "abort": true, 00:18:55.197 "nvme_admin": false, 00:18:55.197 "nvme_io": false 00:18:55.197 }, 00:18:55.197 "memory_domains": [ 00:18:55.197 { 00:18:55.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.197 "dma_device_type": 2 00:18:55.197 } 00:18:55.197 ], 00:18:55.197 "driver_specific": {} 00:18:55.197 } 00:18:55.197 ] 00:18:55.197 05:37:59 -- common/autotest_common.sh@895 -- # return 0 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.197 05:37:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.456 05:37:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.456 "name": "Existed_Raid", 00:18:55.456 "uuid": "dab54eb6-a52c-4dc5-b2f5-c3a704300b20", 00:18:55.456 "strip_size_kb": 64, 00:18:55.456 "state": "online", 00:18:55.456 "raid_level": "concat", 00:18:55.456 "superblock": true, 00:18:55.456 "num_base_bdevs": 4, 00:18:55.456 "num_base_bdevs_discovered": 4, 00:18:55.456 "num_base_bdevs_operational": 4, 00:18:55.456 "base_bdevs_list": [ 00:18:55.456 { 00:18:55.456 "name": "BaseBdev1", 00:18:55.456 "uuid": "e98b054a-55e7-4d18-89f0-7ae8451fa875", 00:18:55.456 "is_configured": true, 00:18:55.456 "data_offset": 2048, 00:18:55.456 "data_size": 63488 00:18:55.456 }, 00:18:55.456 { 00:18:55.456 "name": "BaseBdev2", 00:18:55.456 "uuid": "8e635be1-0e8d-4072-8720-6413cf93cb4a", 00:18:55.456 "is_configured": true, 00:18:55.456 "data_offset": 2048, 00:18:55.456 "data_size": 63488 00:18:55.456 }, 00:18:55.456 { 00:18:55.456 "name": "BaseBdev3", 00:18:55.456 "uuid": "bd048bd2-c85a-4692-a207-414735605286", 00:18:55.456 "is_configured": true, 00:18:55.456 "data_offset": 2048, 00:18:55.456 "data_size": 63488 00:18:55.456 }, 00:18:55.456 { 00:18:55.456 "name": "BaseBdev4", 00:18:55.456 "uuid": "268fcee7-098d-45bd-8cb1-c7771565fdc6", 00:18:55.456 "is_configured": true, 00:18:55.456 "data_offset": 2048, 00:18:55.456 "data_size": 63488 00:18:55.456 } 00:18:55.456 ] 00:18:55.456 }' 00:18:55.456 05:37:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.456 05:37:59 -- common/autotest_common.sh@10 -- # set +x 00:18:56.023 05:37:59 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:56.282 [2024-10-07 05:38:00.166949] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.282 [2024-10-07 05:38:00.166983] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.282 [2024-10-07 05:38:00.167035] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.282 05:38:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.540 05:38:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.540 "name": "Existed_Raid", 00:18:56.540 "uuid": "dab54eb6-a52c-4dc5-b2f5-c3a704300b20", 00:18:56.540 "strip_size_kb": 64, 00:18:56.540 "state": "offline", 00:18:56.540 "raid_level": "concat", 00:18:56.540 "superblock": true, 00:18:56.540 "num_base_bdevs": 4, 00:18:56.540 "num_base_bdevs_discovered": 3, 00:18:56.540 "num_base_bdevs_operational": 3, 00:18:56.540 "base_bdevs_list": [ 00:18:56.540 { 00:18:56.540 "name": null, 00:18:56.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.540 "is_configured": false, 00:18:56.540 "data_offset": 2048, 00:18:56.540 "data_size": 63488 00:18:56.540 }, 00:18:56.540 { 00:18:56.540 "name": "BaseBdev2", 00:18:56.540 "uuid": "8e635be1-0e8d-4072-8720-6413cf93cb4a", 00:18:56.540 "is_configured": true, 00:18:56.540 "data_offset": 2048, 00:18:56.540 "data_size": 63488 00:18:56.540 }, 00:18:56.540 { 00:18:56.540 "name": "BaseBdev3", 00:18:56.540 "uuid": "bd048bd2-c85a-4692-a207-414735605286", 00:18:56.540 "is_configured": true, 00:18:56.540 "data_offset": 2048, 00:18:56.540 "data_size": 63488 00:18:56.540 }, 00:18:56.540 { 00:18:56.540 "name": "BaseBdev4", 00:18:56.540 "uuid": "268fcee7-098d-45bd-8cb1-c7771565fdc6", 00:18:56.540 "is_configured": true, 00:18:56.540 "data_offset": 2048, 00:18:56.540 "data_size": 63488 00:18:56.540 } 00:18:56.540 ] 00:18:56.540 }' 00:18:56.540 05:38:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.540 05:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:57.105 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:57.105 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.105 05:38:01 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.105 05:38:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.363 05:38:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.363 05:38:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.363 05:38:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:57.622 [2024-10-07 05:38:01.432240] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.622 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.622 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.622 05:38:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.622 05:38:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.885 05:38:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.885 05:38:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.885 05:38:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:58.166 [2024-10-07 05:38:01.916671] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.166 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.166 05:38:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.166 05:38:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:58.166 05:38:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.438 05:38:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:58.438 05:38:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.438 05:38:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:58.697 [2024-10-07 05:38:02.423952] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:58.697 [2024-10-07 05:38:02.424005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:58.697 05:38:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.697 05:38:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.697 05:38:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.697 05:38:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.955 05:38:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:58.955 05:38:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:58.955 05:38:02 -- bdev/bdev_raid.sh@287 -- # killprocess 153068 00:18:58.955 05:38:02 -- common/autotest_common.sh@926 -- # '[' -z 153068 ']' 00:18:58.955 05:38:02 -- common/autotest_common.sh@930 -- # kill -0 153068 00:18:58.955 05:38:02 -- common/autotest_common.sh@931 -- # uname 00:18:58.955 05:38:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:58.955 05:38:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153068 00:18:58.955 05:38:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:58.955 05:38:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:58.955 05:38:02 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 153068' 00:18:58.955 killing process with pid 153068 00:18:58.955 05:38:02 -- common/autotest_common.sh@945 -- # kill 153068 00:18:58.955 [2024-10-07 05:38:02.737644] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.955 [2024-10-07 05:38:02.737744] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.955 05:38:02 -- common/autotest_common.sh@950 -- # wait 153068 00:18:59.890 ************************************ 00:18:59.890 END TEST raid_state_function_test_sb 00:18:59.890 ************************************ 00:18:59.890 05:38:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:59.890 00:18:59.890 real 0m14.820s 00:18:59.890 user 0m26.301s 00:18:59.890 sys 0m1.920s 00:18:59.890 05:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.890 05:38:03 -- common/autotest_common.sh@10 -- # set +x 00:18:59.890 05:38:03 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:59.890 05:38:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:59.890 05:38:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.891 05:38:03 -- common/autotest_common.sh@10 -- # set +x 00:18:59.891 ************************************ 00:18:59.891 START TEST raid_superblock_test 00:18:59.891 ************************************ 00:18:59.891 05:38:03 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@357 -- # raid_pid=153982 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@358 -- # waitforlisten 153982 /var/tmp/spdk-raid.sock 00:18:59.891 05:38:03 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:59.891 05:38:03 -- common/autotest_common.sh@819 -- # '[' -z 153982 ']' 00:18:59.891 05:38:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:59.891 05:38:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.891 05:38:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:59.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
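The trace above launches a dedicated bdev_svc application on a private RPC socket and blocks until it answers. As a minimal, editorial sketch of that startup (paths and flags are copied from the trace; the harness' own waitforlisten helper is not shown in this excerpt, so the readiness loop below is an assumption, not the script's literal code):

    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -L bdev_raid &
    svc_pid=$!
    # Illustrative readiness check: poll the RPC socket until bdev_svc responds.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_get_bdevs >/dev/null 2>&1; do
        sleep 0.1
    done

Once the socket answers, the superblock test drives everything that follows through rpc.py against that same socket.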
00:18:59.891 05:38:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.891 05:38:03 -- common/autotest_common.sh@10 -- # set +x 00:18:59.891 [2024-10-07 05:38:03.782164] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:59.891 [2024-10-07 05:38:03.783179] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153982 ] 00:19:00.150 [2024-10-07 05:38:03.951523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.408 [2024-10-07 05:38:04.134071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.408 [2024-10-07 05:38:04.303012] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.668 05:38:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.668 05:38:04 -- common/autotest_common.sh@852 -- # return 0 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:00.668 05:38:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:00.926 malloc1 00:19:00.926 05:38:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.185 [2024-10-07 05:38:05.084518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.185 [2024-10-07 05:38:05.084604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.185 [2024-10-07 05:38:05.084639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:01.185 [2024-10-07 05:38:05.084696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.185 [2024-10-07 05:38:05.086636] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.185 [2024-10-07 05:38:05.086688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.185 pt1 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.185 05:38:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:01.444 malloc2 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.703 [2024-10-07 05:38:05.659128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.703 [2024-10-07 05:38:05.659202] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.703 [2024-10-07 05:38:05.659248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:01.703 [2024-10-07 05:38:05.659309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.703 [2024-10-07 05:38:05.661462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.703 [2024-10-07 05:38:05.661516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.703 pt2 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.703 05:38:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:02.270 malloc3 00:19:02.270 05:38:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:02.270 [2024-10-07 05:38:06.173281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:02.270 [2024-10-07 05:38:06.173378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.270 [2024-10-07 05:38:06.173432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:02.270 [2024-10-07 05:38:06.173481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.270 [2024-10-07 05:38:06.175675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.270 [2024-10-07 05:38:06.175733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:02.270 pt3 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.270 05:38:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:02.529 malloc4 00:19:02.529 05:38:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:02.786 [2024-10-07 05:38:06.746460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:02.786 [2024-10-07 05:38:06.746550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.786 [2024-10-07 05:38:06.746588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:02.786 [2024-10-07 05:38:06.746635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.786 [2024-10-07 05:38:06.748786] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.786 [2024-10-07 05:38:06.748843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:02.786 pt4 00:19:03.045 05:38:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:03.045 05:38:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:03.045 05:38:06 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:03.045 [2024-10-07 05:38:07.022607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.303 [2024-10-07 05:38:07.024509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.303 [2024-10-07 05:38:07.024597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.303 [2024-10-07 05:38:07.024687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:03.303 [2024-10-07 05:38:07.024906] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:03.303 [2024-10-07 05:38:07.024922] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:03.303 [2024-10-07 05:38:07.025049] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:03.303 [2024-10-07 05:38:07.025411] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:03.303 [2024-10-07 05:38:07.025436] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:03.303 [2024-10-07 05:38:07.025590] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.303 05:38:07 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.562 05:38:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.562 "name": "raid_bdev1", 00:19:03.562 "uuid": "6be9ed6d-e533-494c-baf0-50d434aebe35", 00:19:03.562 "strip_size_kb": 64, 00:19:03.562 "state": "online", 00:19:03.562 "raid_level": "concat", 00:19:03.562 "superblock": true, 00:19:03.562 "num_base_bdevs": 4, 00:19:03.562 "num_base_bdevs_discovered": 4, 00:19:03.562 "num_base_bdevs_operational": 4, 00:19:03.562 "base_bdevs_list": [ 00:19:03.562 { 00:19:03.562 "name": "pt1", 00:19:03.562 "uuid": "0f8fc738-7f19-559e-b1ea-733226c22273", 00:19:03.562 "is_configured": true, 00:19:03.562 "data_offset": 2048, 00:19:03.562 "data_size": 63488 00:19:03.562 }, 00:19:03.562 { 00:19:03.562 "name": "pt2", 00:19:03.562 "uuid": "d5efb21c-c41a-5d50-bc78-d17f1c68f27a", 00:19:03.562 "is_configured": true, 00:19:03.562 "data_offset": 2048, 00:19:03.562 "data_size": 63488 00:19:03.562 }, 00:19:03.562 { 00:19:03.562 "name": "pt3", 00:19:03.562 "uuid": "46c8ce3b-4ce1-5ec6-a1c2-40ee03ef8d12", 00:19:03.562 "is_configured": true, 00:19:03.562 "data_offset": 2048, 00:19:03.562 "data_size": 63488 00:19:03.562 }, 00:19:03.562 { 00:19:03.562 "name": "pt4", 00:19:03.562 "uuid": "84b80c1e-8ea1-5a82-878f-3ac23b10fd53", 00:19:03.562 "is_configured": true, 00:19:03.562 "data_offset": 2048, 00:19:03.562 "data_size": 63488 00:19:03.562 } 00:19:03.562 ] 00:19:03.562 }' 00:19:03.562 05:38:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.562 05:38:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.129 05:38:07 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:04.129 05:38:07 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:04.389 [2024-10-07 05:38:08.178896] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.389 05:38:08 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6be9ed6d-e533-494c-baf0-50d434aebe35 00:19:04.389 05:38:08 -- bdev/bdev_raid.sh@380 -- # '[' -z 6be9ed6d-e533-494c-baf0-50d434aebe35 ']' 00:19:04.389 05:38:08 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:04.647 [2024-10-07 05:38:08.450759] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.647 [2024-10-07 05:38:08.450786] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.647 [2024-10-07 05:38:08.450870] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.647 [2024-10-07 05:38:08.450941] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:04.647 [2024-10-07 05:38:08.450956] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:04.647 05:38:08 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.647 05:38:08 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:04.906 05:38:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:04.906 05:38:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:04.906 05:38:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:04.906 05:38:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
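Condensing the build-up recorded above into one place, a hedged sketch of the RPC sequence (socket path, sizes and UUIDs are taken verbatim from the trace; the loop is editorial shorthand, not the script's literal structure):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Four 32 MB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev with a fixed UUID.
    for i in 1 2 3 4; do
        $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i
        $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # Assemble a concat raid with 64 KiB strip size and an on-disk superblock (-s).
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # State check, roughly what verify_raid_bdev_state does in the trace.
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

At this point in the run the last command would report "online", with all four base bdevs discovered and operational, matching the raid_bdev_info JSON recorded above.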
00:19:05.165 05:38:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.165 05:38:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:05.165 05:38:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.165 05:38:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:05.424 05:38:09 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:05.424 05:38:09 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:05.682 05:38:09 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:05.682 05:38:09 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:05.941 05:38:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:05.941 05:38:09 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:05.941 05:38:09 -- common/autotest_common.sh@640 -- # local es=0 00:19:05.941 05:38:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:05.941 05:38:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.941 05:38:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.941 05:38:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.941 05:38:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.941 05:38:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.942 05:38:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:05.942 05:38:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:05.942 05:38:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:05.942 05:38:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:06.200 [2024-10-07 05:38:09.978932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:06.200 [2024-10-07 05:38:09.980808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:06.200 [2024-10-07 05:38:09.980868] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:06.200 [2024-10-07 05:38:09.980922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:06.200 [2024-10-07 05:38:09.980976] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:06.200 [2024-10-07 05:38:09.981063] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:06.200 [2024-10-07 05:38:09.981140] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:06.200 
[2024-10-07 05:38:09.981212] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:06.200 [2024-10-07 05:38:09.981246] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.200 [2024-10-07 05:38:09.981258] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:06.200 request: 00:19:06.200 { 00:19:06.200 "name": "raid_bdev1", 00:19:06.200 "raid_level": "concat", 00:19:06.200 "base_bdevs": [ 00:19:06.200 "malloc1", 00:19:06.200 "malloc2", 00:19:06.200 "malloc3", 00:19:06.200 "malloc4" 00:19:06.200 ], 00:19:06.200 "superblock": false, 00:19:06.200 "strip_size_kb": 64, 00:19:06.200 "method": "bdev_raid_create", 00:19:06.200 "req_id": 1 00:19:06.200 } 00:19:06.200 Got JSON-RPC error response 00:19:06.200 response: 00:19:06.200 { 00:19:06.200 "code": -17, 00:19:06.200 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:06.200 } 00:19:06.200 05:38:09 -- common/autotest_common.sh@643 -- # es=1 00:19:06.200 05:38:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:06.200 05:38:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:06.200 05:38:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:06.200 05:38:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:06.200 05:38:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.458 [2024-10-07 05:38:10.363008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.458 [2024-10-07 05:38:10.363083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.458 [2024-10-07 05:38:10.363124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:06.458 [2024-10-07 05:38:10.363151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.458 [2024-10-07 05:38:10.365087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.458 [2024-10-07 05:38:10.365160] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.458 [2024-10-07 05:38:10.365255] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:06.458 [2024-10-07 05:38:10.365318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.458 pt1 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.458 05:38:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.717 05:38:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.717 "name": "raid_bdev1", 00:19:06.717 "uuid": "6be9ed6d-e533-494c-baf0-50d434aebe35", 00:19:06.717 "strip_size_kb": 64, 00:19:06.717 "state": "configuring", 00:19:06.717 "raid_level": "concat", 00:19:06.717 "superblock": true, 00:19:06.717 "num_base_bdevs": 4, 00:19:06.717 "num_base_bdevs_discovered": 1, 00:19:06.717 "num_base_bdevs_operational": 4, 00:19:06.717 "base_bdevs_list": [ 00:19:06.717 { 00:19:06.717 "name": "pt1", 00:19:06.717 "uuid": "0f8fc738-7f19-559e-b1ea-733226c22273", 00:19:06.717 "is_configured": true, 00:19:06.717 "data_offset": 2048, 00:19:06.717 "data_size": 63488 00:19:06.717 }, 00:19:06.717 { 00:19:06.717 "name": null, 00:19:06.717 "uuid": "d5efb21c-c41a-5d50-bc78-d17f1c68f27a", 00:19:06.717 "is_configured": false, 00:19:06.717 "data_offset": 2048, 00:19:06.717 "data_size": 63488 00:19:06.717 }, 00:19:06.717 { 00:19:06.717 "name": null, 00:19:06.717 "uuid": "46c8ce3b-4ce1-5ec6-a1c2-40ee03ef8d12", 00:19:06.717 "is_configured": false, 00:19:06.717 "data_offset": 2048, 00:19:06.717 "data_size": 63488 00:19:06.717 }, 00:19:06.717 { 00:19:06.717 "name": null, 00:19:06.717 "uuid": "84b80c1e-8ea1-5a82-878f-3ac23b10fd53", 00:19:06.717 "is_configured": false, 00:19:06.717 "data_offset": 2048, 00:19:06.717 "data_size": 63488 00:19:06.717 } 00:19:06.717 ] 00:19:06.717 }' 00:19:06.717 05:38:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.717 05:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:07.284 05:38:11 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:07.284 05:38:11 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:07.542 [2024-10-07 05:38:11.471312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:07.542 [2024-10-07 05:38:11.471401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.542 [2024-10-07 05:38:11.471449] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:07.543 [2024-10-07 05:38:11.471475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.543 [2024-10-07 05:38:11.471972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.543 [2024-10-07 05:38:11.472054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:07.543 [2024-10-07 05:38:11.472166] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:07.543 [2024-10-07 05:38:11.472200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:07.543 pt2 00:19:07.543 05:38:11 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:07.801 [2024-10-07 05:38:11.735305] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
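The failed create and the re-registration of pt1 above can be read as the following sketch (same socket and names as in the trace; the echo line and the jq output format are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # The malloc bdevs still carry raid superblocks, so building a new raid
    # directly on top of them is rejected with -17 "File exists".
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 \
        || echo 'create failed as expected'
    # Re-registering pt1 lets the examine path pick its superblock up again;
    # raid_bdev1 then sits in "configuring" with one of four members discovered.
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'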
00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.801 05:38:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.060 05:38:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.060 "name": "raid_bdev1", 00:19:08.060 "uuid": "6be9ed6d-e533-494c-baf0-50d434aebe35", 00:19:08.060 "strip_size_kb": 64, 00:19:08.060 "state": "configuring", 00:19:08.060 "raid_level": "concat", 00:19:08.060 "superblock": true, 00:19:08.060 "num_base_bdevs": 4, 00:19:08.060 "num_base_bdevs_discovered": 1, 00:19:08.060 "num_base_bdevs_operational": 4, 00:19:08.060 "base_bdevs_list": [ 00:19:08.060 { 00:19:08.060 "name": "pt1", 00:19:08.060 "uuid": "0f8fc738-7f19-559e-b1ea-733226c22273", 00:19:08.060 "is_configured": true, 00:19:08.060 "data_offset": 2048, 00:19:08.060 "data_size": 63488 00:19:08.060 }, 00:19:08.060 { 00:19:08.060 "name": null, 00:19:08.060 "uuid": "d5efb21c-c41a-5d50-bc78-d17f1c68f27a", 00:19:08.060 "is_configured": false, 00:19:08.060 "data_offset": 2048, 00:19:08.060 "data_size": 63488 00:19:08.060 }, 00:19:08.060 { 00:19:08.060 "name": null, 00:19:08.060 "uuid": "46c8ce3b-4ce1-5ec6-a1c2-40ee03ef8d12", 00:19:08.060 "is_configured": false, 00:19:08.060 "data_offset": 2048, 00:19:08.060 "data_size": 63488 00:19:08.060 }, 00:19:08.060 { 00:19:08.060 "name": null, 00:19:08.060 "uuid": "84b80c1e-8ea1-5a82-878f-3ac23b10fd53", 00:19:08.060 "is_configured": false, 00:19:08.060 "data_offset": 2048, 00:19:08.060 "data_size": 63488 00:19:08.060 } 00:19:08.060 ] 00:19:08.060 }' 00:19:08.060 05:38:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.060 05:38:11 -- common/autotest_common.sh@10 -- # set +x 00:19:08.629 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:08.629 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.629 05:38:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:08.888 [2024-10-07 05:38:12.675477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:08.888 [2024-10-07 05:38:12.675565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.888 [2024-10-07 05:38:12.675613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:08.888 [2024-10-07 05:38:12.675639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.888 [2024-10-07 05:38:12.676151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.888 [2024-10-07 05:38:12.676218] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:08.888 [2024-10-07 05:38:12.676323] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:08.888 [2024-10-07 05:38:12.676352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:08.888 pt2 00:19:08.888 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:08.888 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:08.888 05:38:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:09.146 [2024-10-07 05:38:12.951483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:09.146 [2024-10-07 05:38:12.951552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.146 [2024-10-07 05:38:12.951584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:09.146 [2024-10-07 05:38:12.951614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.146 [2024-10-07 05:38:12.952039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.146 [2024-10-07 05:38:12.952107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:09.146 [2024-10-07 05:38:12.952191] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:09.146 [2024-10-07 05:38:12.952215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:09.146 pt3 00:19:09.146 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:09.146 05:38:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:09.146 05:38:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:09.404 [2024-10-07 05:38:13.143517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:09.404 [2024-10-07 05:38:13.143589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.404 [2024-10-07 05:38:13.143629] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:09.404 [2024-10-07 05:38:13.143660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.404 [2024-10-07 05:38:13.144057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.404 [2024-10-07 05:38:13.144119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:09.405 [2024-10-07 05:38:13.144215] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:09.405 [2024-10-07 05:38:13.144242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:09.405 [2024-10-07 05:38:13.144380] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:09.405 [2024-10-07 05:38:13.144395] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:09.405 [2024-10-07 05:38:13.144493] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:09.405 [2024-10-07 05:38:13.144800] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:09.405 [2024-10-07 05:38:13.144823] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:09.405 [2024-10-07 05:38:13.144949] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:09.405 pt4 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.405 05:38:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.663 05:38:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.663 "name": "raid_bdev1", 00:19:09.663 "uuid": "6be9ed6d-e533-494c-baf0-50d434aebe35", 00:19:09.663 "strip_size_kb": 64, 00:19:09.663 "state": "online", 00:19:09.663 "raid_level": "concat", 00:19:09.663 "superblock": true, 00:19:09.663 "num_base_bdevs": 4, 00:19:09.663 "num_base_bdevs_discovered": 4, 00:19:09.663 "num_base_bdevs_operational": 4, 00:19:09.663 "base_bdevs_list": [ 00:19:09.663 { 00:19:09.663 "name": "pt1", 00:19:09.663 "uuid": "0f8fc738-7f19-559e-b1ea-733226c22273", 00:19:09.663 "is_configured": true, 00:19:09.663 "data_offset": 2048, 00:19:09.663 "data_size": 63488 00:19:09.663 }, 00:19:09.663 { 00:19:09.663 "name": "pt2", 00:19:09.663 "uuid": "d5efb21c-c41a-5d50-bc78-d17f1c68f27a", 00:19:09.663 "is_configured": true, 00:19:09.663 "data_offset": 2048, 00:19:09.663 "data_size": 63488 00:19:09.663 }, 00:19:09.663 { 00:19:09.663 "name": "pt3", 00:19:09.663 "uuid": "46c8ce3b-4ce1-5ec6-a1c2-40ee03ef8d12", 00:19:09.663 "is_configured": true, 00:19:09.663 "data_offset": 2048, 00:19:09.663 "data_size": 63488 00:19:09.663 }, 00:19:09.663 { 00:19:09.663 "name": "pt4", 00:19:09.663 "uuid": "84b80c1e-8ea1-5a82-878f-3ac23b10fd53", 00:19:09.663 "is_configured": true, 00:19:09.663 "data_offset": 2048, 00:19:09.663 "data_size": 63488 00:19:09.663 } 00:19:09.663 ] 00:19:09.663 }' 00:19:09.663 05:38:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.663 05:38:13 -- common/autotest_common.sh@10 -- # set +x 00:19:10.230 05:38:14 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:10.230 05:38:14 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:10.489 [2024-10-07 05:38:14.255946] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:10.489 05:38:14 -- bdev/bdev_raid.sh@430 -- # '[' 6be9ed6d-e533-494c-baf0-50d434aebe35 '!=' 6be9ed6d-e533-494c-baf0-50d434aebe35 ']' 00:19:10.489 05:38:14 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:10.489 05:38:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:10.489 05:38:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:10.489 05:38:14 -- bdev/bdev_raid.sh@511 -- # killprocess 153982 00:19:10.489 05:38:14 -- common/autotest_common.sh@926 -- # '[' 
-z 153982 ']' 00:19:10.489 05:38:14 -- common/autotest_common.sh@930 -- # kill -0 153982 00:19:10.489 05:38:14 -- common/autotest_common.sh@931 -- # uname 00:19:10.489 05:38:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:10.489 05:38:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 153982 00:19:10.489 killing process with pid 153982 00:19:10.489 05:38:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:10.489 05:38:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:10.489 05:38:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 153982' 00:19:10.489 05:38:14 -- common/autotest_common.sh@945 -- # kill 153982 00:19:10.489 [2024-10-07 05:38:14.296492] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:10.489 05:38:14 -- common/autotest_common.sh@950 -- # wait 153982 00:19:10.489 [2024-10-07 05:38:14.296571] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.489 [2024-10-07 05:38:14.296645] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:10.489 [2024-10-07 05:38:14.296658] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:10.747 [2024-10-07 05:38:14.556486] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:11.683 ************************************ 00:19:11.683 END TEST raid_superblock_test 00:19:11.683 ************************************ 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:11.683 00:19:11.683 real 0m11.771s 00:19:11.683 user 0m20.577s 00:19:11.683 sys 0m1.414s 00:19:11.683 05:38:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.683 05:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:11.683 05:38:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:11.683 05:38:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:11.683 05:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:11.683 ************************************ 00:19:11.683 START TEST raid_state_function_test 00:19:11.683 ************************************ 00:19:11.683 05:38:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:11.683 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.684 05:38:15 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=154799 00:19:11.684 Process raid pid: 154799 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 154799' 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:11.684 05:38:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 154799 /var/tmp/spdk-raid.sock 00:19:11.684 05:38:15 -- common/autotest_common.sh@819 -- # '[' -z 154799 ']' 00:19:11.684 05:38:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:11.684 05:38:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:11.684 05:38:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:11.684 05:38:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.684 05:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 [2024-10-07 05:38:15.594467] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
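The raid_state_function_test starting here (raid1, four base bdevs, no superblock) follows the pattern recorded in the surrounding trace: the raid bdev is declared before any of its members exist and is expected to stay in the "configuring" state while BaseBdev1 through BaseBdev4 are created one at a time. A hedged sketch, with socket path and names taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Declare the raid1 bdev up front; none of the named base bdevs exist yet.
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Create the first member; the raid stays "configuring" until all four appear.
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'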
00:19:11.684 [2024-10-07 05:38:15.594616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.943 [2024-10-07 05:38:15.742162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.943 [2024-10-07 05:38:15.899260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.201 [2024-10-07 05:38:16.064892] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.769 05:38:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.769 05:38:16 -- common/autotest_common.sh@852 -- # return 0 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:12.769 [2024-10-07 05:38:16.682441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.769 [2024-10-07 05:38:16.682609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.769 [2024-10-07 05:38:16.682627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.769 [2024-10-07 05:38:16.682651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.769 [2024-10-07 05:38:16.682660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.769 [2024-10-07 05:38:16.682701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.769 [2024-10-07 05:38:16.682712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.769 [2024-10-07 05:38:16.682737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.769 05:38:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.028 05:38:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.028 "name": "Existed_Raid", 00:19:13.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.028 "strip_size_kb": 0, 00:19:13.028 "state": "configuring", 00:19:13.028 "raid_level": "raid1", 00:19:13.028 "superblock": false, 00:19:13.028 "num_base_bdevs": 4, 00:19:13.028 "num_base_bdevs_discovered": 0, 00:19:13.028 "num_base_bdevs_operational": 4, 00:19:13.028 "base_bdevs_list": [ 00:19:13.028 { 00:19:13.028 "name": 
"BaseBdev1", 00:19:13.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.028 "is_configured": false, 00:19:13.028 "data_offset": 0, 00:19:13.028 "data_size": 0 00:19:13.028 }, 00:19:13.028 { 00:19:13.028 "name": "BaseBdev2", 00:19:13.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.028 "is_configured": false, 00:19:13.028 "data_offset": 0, 00:19:13.028 "data_size": 0 00:19:13.028 }, 00:19:13.028 { 00:19:13.028 "name": "BaseBdev3", 00:19:13.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.028 "is_configured": false, 00:19:13.028 "data_offset": 0, 00:19:13.028 "data_size": 0 00:19:13.028 }, 00:19:13.028 { 00:19:13.028 "name": "BaseBdev4", 00:19:13.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.028 "is_configured": false, 00:19:13.028 "data_offset": 0, 00:19:13.028 "data_size": 0 00:19:13.028 } 00:19:13.028 ] 00:19:13.028 }' 00:19:13.028 05:38:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.028 05:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:13.964 05:38:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:13.964 [2024-10-07 05:38:17.810485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:13.964 [2024-10-07 05:38:17.810533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:13.964 05:38:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:14.223 [2024-10-07 05:38:18.002550] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.223 [2024-10-07 05:38:18.002611] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.223 [2024-10-07 05:38:18.002625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.223 [2024-10-07 05:38:18.002652] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.223 [2024-10-07 05:38:18.002662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.223 [2024-10-07 05:38:18.002699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.223 [2024-10-07 05:38:18.002708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.223 [2024-10-07 05:38:18.002734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.223 05:38:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.482 [2024-10-07 05:38:18.224410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.482 BaseBdev1 00:19:14.482 05:38:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:14.482 05:38:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:14.482 05:38:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.482 05:38:18 -- common/autotest_common.sh@889 -- # local i 00:19:14.482 05:38:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.482 05:38:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.482 05:38:18 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.482 05:38:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:14.745 [ 00:19:14.745 { 00:19:14.745 "name": "BaseBdev1", 00:19:14.745 "aliases": [ 00:19:14.745 "d80ad971-da4a-4348-90a4-16bfbd6799c0" 00:19:14.745 ], 00:19:14.745 "product_name": "Malloc disk", 00:19:14.745 "block_size": 512, 00:19:14.745 "num_blocks": 65536, 00:19:14.745 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:14.745 "assigned_rate_limits": { 00:19:14.745 "rw_ios_per_sec": 0, 00:19:14.745 "rw_mbytes_per_sec": 0, 00:19:14.745 "r_mbytes_per_sec": 0, 00:19:14.745 "w_mbytes_per_sec": 0 00:19:14.745 }, 00:19:14.745 "claimed": true, 00:19:14.745 "claim_type": "exclusive_write", 00:19:14.745 "zoned": false, 00:19:14.745 "supported_io_types": { 00:19:14.745 "read": true, 00:19:14.745 "write": true, 00:19:14.745 "unmap": true, 00:19:14.745 "write_zeroes": true, 00:19:14.745 "flush": true, 00:19:14.745 "reset": true, 00:19:14.745 "compare": false, 00:19:14.745 "compare_and_write": false, 00:19:14.746 "abort": true, 00:19:14.746 "nvme_admin": false, 00:19:14.746 "nvme_io": false 00:19:14.746 }, 00:19:14.746 "memory_domains": [ 00:19:14.746 { 00:19:14.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.746 "dma_device_type": 2 00:19:14.746 } 00:19:14.746 ], 00:19:14.746 "driver_specific": {} 00:19:14.746 } 00:19:14.746 ] 00:19:14.746 05:38:18 -- common/autotest_common.sh@895 -- # return 0 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.746 05:38:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.034 05:38:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.034 "name": "Existed_Raid", 00:19:15.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.034 "strip_size_kb": 0, 00:19:15.034 "state": "configuring", 00:19:15.034 "raid_level": "raid1", 00:19:15.034 "superblock": false, 00:19:15.034 "num_base_bdevs": 4, 00:19:15.034 "num_base_bdevs_discovered": 1, 00:19:15.034 "num_base_bdevs_operational": 4, 00:19:15.034 "base_bdevs_list": [ 00:19:15.034 { 00:19:15.034 "name": "BaseBdev1", 00:19:15.034 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:15.034 "is_configured": true, 00:19:15.034 "data_offset": 0, 00:19:15.034 "data_size": 65536 00:19:15.034 }, 00:19:15.034 { 00:19:15.034 "name": "BaseBdev2", 00:19:15.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.034 "is_configured": false, 00:19:15.034 "data_offset": 0, 00:19:15.034 "data_size": 0 00:19:15.034 }, 
00:19:15.034 { 00:19:15.034 "name": "BaseBdev3", 00:19:15.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.034 "is_configured": false, 00:19:15.034 "data_offset": 0, 00:19:15.034 "data_size": 0 00:19:15.034 }, 00:19:15.034 { 00:19:15.034 "name": "BaseBdev4", 00:19:15.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.034 "is_configured": false, 00:19:15.034 "data_offset": 0, 00:19:15.034 "data_size": 0 00:19:15.034 } 00:19:15.034 ] 00:19:15.034 }' 00:19:15.034 05:38:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.034 05:38:18 -- common/autotest_common.sh@10 -- # set +x 00:19:15.606 05:38:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:15.864 [2024-10-07 05:38:19.720716] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.864 [2024-10-07 05:38:19.720778] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:15.864 05:38:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:15.864 05:38:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:16.122 [2024-10-07 05:38:19.984749] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:16.122 [2024-10-07 05:38:19.986588] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.122 [2024-10-07 05:38:19.986667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.122 [2024-10-07 05:38:19.986681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:16.122 [2024-10-07 05:38:19.986710] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:16.122 [2024-10-07 05:38:19.986719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:16.122 [2024-10-07 05:38:19.986739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:16.122 05:38:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.122 05:38:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.381 05:38:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.381 "name": "Existed_Raid", 00:19:16.381 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:16.381 "strip_size_kb": 0, 00:19:16.381 "state": "configuring", 00:19:16.381 "raid_level": "raid1", 00:19:16.381 "superblock": false, 00:19:16.381 "num_base_bdevs": 4, 00:19:16.381 "num_base_bdevs_discovered": 1, 00:19:16.381 "num_base_bdevs_operational": 4, 00:19:16.381 "base_bdevs_list": [ 00:19:16.381 { 00:19:16.381 "name": "BaseBdev1", 00:19:16.381 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:16.381 "is_configured": true, 00:19:16.381 "data_offset": 0, 00:19:16.381 "data_size": 65536 00:19:16.381 }, 00:19:16.381 { 00:19:16.381 "name": "BaseBdev2", 00:19:16.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.381 "is_configured": false, 00:19:16.381 "data_offset": 0, 00:19:16.381 "data_size": 0 00:19:16.381 }, 00:19:16.381 { 00:19:16.381 "name": "BaseBdev3", 00:19:16.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.381 "is_configured": false, 00:19:16.381 "data_offset": 0, 00:19:16.381 "data_size": 0 00:19:16.381 }, 00:19:16.381 { 00:19:16.381 "name": "BaseBdev4", 00:19:16.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.381 "is_configured": false, 00:19:16.381 "data_offset": 0, 00:19:16.381 "data_size": 0 00:19:16.381 } 00:19:16.381 ] 00:19:16.381 }' 00:19:16.381 05:38:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.381 05:38:20 -- common/autotest_common.sh@10 -- # set +x 00:19:16.948 05:38:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:17.206 [2024-10-07 05:38:21.088280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.206 BaseBdev2 00:19:17.206 05:38:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:17.206 05:38:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:17.206 05:38:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:17.206 05:38:21 -- common/autotest_common.sh@889 -- # local i 00:19:17.206 05:38:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:17.206 05:38:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:17.206 05:38:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:17.465 05:38:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:17.723 [ 00:19:17.723 { 00:19:17.723 "name": "BaseBdev2", 00:19:17.723 "aliases": [ 00:19:17.723 "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e" 00:19:17.723 ], 00:19:17.723 "product_name": "Malloc disk", 00:19:17.723 "block_size": 512, 00:19:17.723 "num_blocks": 65536, 00:19:17.723 "uuid": "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e", 00:19:17.723 "assigned_rate_limits": { 00:19:17.723 "rw_ios_per_sec": 0, 00:19:17.723 "rw_mbytes_per_sec": 0, 00:19:17.723 "r_mbytes_per_sec": 0, 00:19:17.723 "w_mbytes_per_sec": 0 00:19:17.723 }, 00:19:17.723 "claimed": true, 00:19:17.723 "claim_type": "exclusive_write", 00:19:17.723 "zoned": false, 00:19:17.723 "supported_io_types": { 00:19:17.723 "read": true, 00:19:17.723 "write": true, 00:19:17.723 "unmap": true, 00:19:17.723 "write_zeroes": true, 00:19:17.723 "flush": true, 00:19:17.723 "reset": true, 00:19:17.723 "compare": false, 00:19:17.723 "compare_and_write": false, 00:19:17.723 "abort": true, 00:19:17.723 "nvme_admin": false, 00:19:17.723 "nvme_io": false 00:19:17.723 }, 00:19:17.723 "memory_domains": [ 00:19:17.723 { 
00:19:17.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.723 "dma_device_type": 2 00:19:17.723 } 00:19:17.723 ], 00:19:17.723 "driver_specific": {} 00:19:17.723 } 00:19:17.723 ] 00:19:17.723 05:38:21 -- common/autotest_common.sh@895 -- # return 0 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.723 05:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.981 05:38:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.981 "name": "Existed_Raid", 00:19:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.981 "strip_size_kb": 0, 00:19:17.981 "state": "configuring", 00:19:17.981 "raid_level": "raid1", 00:19:17.981 "superblock": false, 00:19:17.981 "num_base_bdevs": 4, 00:19:17.981 "num_base_bdevs_discovered": 2, 00:19:17.981 "num_base_bdevs_operational": 4, 00:19:17.981 "base_bdevs_list": [ 00:19:17.981 { 00:19:17.981 "name": "BaseBdev1", 00:19:17.981 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:17.981 "is_configured": true, 00:19:17.981 "data_offset": 0, 00:19:17.981 "data_size": 65536 00:19:17.981 }, 00:19:17.981 { 00:19:17.981 "name": "BaseBdev2", 00:19:17.981 "uuid": "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e", 00:19:17.981 "is_configured": true, 00:19:17.981 "data_offset": 0, 00:19:17.981 "data_size": 65536 00:19:17.981 }, 00:19:17.981 { 00:19:17.981 "name": "BaseBdev3", 00:19:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.981 "is_configured": false, 00:19:17.981 "data_offset": 0, 00:19:17.981 "data_size": 0 00:19:17.981 }, 00:19:17.981 { 00:19:17.981 "name": "BaseBdev4", 00:19:17.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.981 "is_configured": false, 00:19:17.981 "data_offset": 0, 00:19:17.981 "data_size": 0 00:19:17.981 } 00:19:17.981 ] 00:19:17.981 }' 00:19:17.981 05:38:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.981 05:38:21 -- common/autotest_common.sh@10 -- # set +x 00:19:18.549 05:38:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:18.807 [2024-10-07 05:38:22.688427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.807 BaseBdev3 00:19:18.807 05:38:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:18.807 05:38:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:18.807 05:38:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:18.807 05:38:22 -- 
common/autotest_common.sh@889 -- # local i 00:19:18.807 05:38:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:18.807 05:38:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:18.807 05:38:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.067 05:38:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:19.326 [ 00:19:19.326 { 00:19:19.326 "name": "BaseBdev3", 00:19:19.326 "aliases": [ 00:19:19.326 "8656c32b-538f-4331-a2bf-20098b886a47" 00:19:19.326 ], 00:19:19.326 "product_name": "Malloc disk", 00:19:19.326 "block_size": 512, 00:19:19.326 "num_blocks": 65536, 00:19:19.326 "uuid": "8656c32b-538f-4331-a2bf-20098b886a47", 00:19:19.326 "assigned_rate_limits": { 00:19:19.326 "rw_ios_per_sec": 0, 00:19:19.326 "rw_mbytes_per_sec": 0, 00:19:19.326 "r_mbytes_per_sec": 0, 00:19:19.326 "w_mbytes_per_sec": 0 00:19:19.326 }, 00:19:19.326 "claimed": true, 00:19:19.326 "claim_type": "exclusive_write", 00:19:19.326 "zoned": false, 00:19:19.326 "supported_io_types": { 00:19:19.326 "read": true, 00:19:19.326 "write": true, 00:19:19.326 "unmap": true, 00:19:19.326 "write_zeroes": true, 00:19:19.326 "flush": true, 00:19:19.326 "reset": true, 00:19:19.326 "compare": false, 00:19:19.326 "compare_and_write": false, 00:19:19.326 "abort": true, 00:19:19.326 "nvme_admin": false, 00:19:19.326 "nvme_io": false 00:19:19.326 }, 00:19:19.326 "memory_domains": [ 00:19:19.326 { 00:19:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.326 "dma_device_type": 2 00:19:19.326 } 00:19:19.326 ], 00:19:19.326 "driver_specific": {} 00:19:19.326 } 00:19:19.326 ] 00:19:19.326 05:38:23 -- common/autotest_common.sh@895 -- # return 0 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.326 05:38:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.585 05:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.585 "name": "Existed_Raid", 00:19:19.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.585 "strip_size_kb": 0, 00:19:19.585 "state": "configuring", 00:19:19.585 "raid_level": "raid1", 00:19:19.585 "superblock": false, 00:19:19.585 "num_base_bdevs": 4, 00:19:19.585 "num_base_bdevs_discovered": 3, 00:19:19.585 "num_base_bdevs_operational": 4, 00:19:19.585 "base_bdevs_list": [ 00:19:19.585 { 00:19:19.585 "name": "BaseBdev1", 
00:19:19.585 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:19.585 "is_configured": true, 00:19:19.585 "data_offset": 0, 00:19:19.585 "data_size": 65536 00:19:19.585 }, 00:19:19.585 { 00:19:19.585 "name": "BaseBdev2", 00:19:19.585 "uuid": "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e", 00:19:19.585 "is_configured": true, 00:19:19.585 "data_offset": 0, 00:19:19.585 "data_size": 65536 00:19:19.585 }, 00:19:19.585 { 00:19:19.585 "name": "BaseBdev3", 00:19:19.585 "uuid": "8656c32b-538f-4331-a2bf-20098b886a47", 00:19:19.585 "is_configured": true, 00:19:19.585 "data_offset": 0, 00:19:19.585 "data_size": 65536 00:19:19.585 }, 00:19:19.585 { 00:19:19.585 "name": "BaseBdev4", 00:19:19.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.585 "is_configured": false, 00:19:19.585 "data_offset": 0, 00:19:19.585 "data_size": 0 00:19:19.585 } 00:19:19.585 ] 00:19:19.585 }' 00:19:19.585 05:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.585 05:38:23 -- common/autotest_common.sh@10 -- # set +x 00:19:20.152 05:38:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:20.410 [2024-10-07 05:38:24.276534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:20.410 [2024-10-07 05:38:24.276762] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:20.410 [2024-10-07 05:38:24.276814] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:20.410 [2024-10-07 05:38:24.277080] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:20.411 [2024-10-07 05:38:24.277577] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:20.411 [2024-10-07 05:38:24.277720] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:20.411 [2024-10-07 05:38:24.278075] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.411 BaseBdev4 00:19:20.411 05:38:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:20.411 05:38:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:20.411 05:38:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:20.411 05:38:24 -- common/autotest_common.sh@889 -- # local i 00:19:20.411 05:38:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:20.411 05:38:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:20.411 05:38:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.671 05:38:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.930 [ 00:19:20.930 { 00:19:20.930 "name": "BaseBdev4", 00:19:20.930 "aliases": [ 00:19:20.930 "853da67e-78c0-4f5d-859c-7da26b38c57b" 00:19:20.930 ], 00:19:20.930 "product_name": "Malloc disk", 00:19:20.930 "block_size": 512, 00:19:20.930 "num_blocks": 65536, 00:19:20.930 "uuid": "853da67e-78c0-4f5d-859c-7da26b38c57b", 00:19:20.930 "assigned_rate_limits": { 00:19:20.930 "rw_ios_per_sec": 0, 00:19:20.930 "rw_mbytes_per_sec": 0, 00:19:20.930 "r_mbytes_per_sec": 0, 00:19:20.930 "w_mbytes_per_sec": 0 00:19:20.930 }, 00:19:20.930 "claimed": true, 00:19:20.930 "claim_type": "exclusive_write", 00:19:20.930 "zoned": false, 00:19:20.930 "supported_io_types": { 
00:19:20.930 "read": true, 00:19:20.930 "write": true, 00:19:20.930 "unmap": true, 00:19:20.930 "write_zeroes": true, 00:19:20.930 "flush": true, 00:19:20.930 "reset": true, 00:19:20.930 "compare": false, 00:19:20.930 "compare_and_write": false, 00:19:20.930 "abort": true, 00:19:20.930 "nvme_admin": false, 00:19:20.930 "nvme_io": false 00:19:20.930 }, 00:19:20.930 "memory_domains": [ 00:19:20.930 { 00:19:20.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.930 "dma_device_type": 2 00:19:20.930 } 00:19:20.930 ], 00:19:20.930 "driver_specific": {} 00:19:20.930 } 00:19:20.930 ] 00:19:20.930 05:38:24 -- common/autotest_common.sh@895 -- # return 0 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.930 05:38:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.189 05:38:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.189 "name": "Existed_Raid", 00:19:21.189 "uuid": "42b8ab46-1813-48aa-8b26-dd76437ec6ea", 00:19:21.189 "strip_size_kb": 0, 00:19:21.189 "state": "online", 00:19:21.189 "raid_level": "raid1", 00:19:21.189 "superblock": false, 00:19:21.189 "num_base_bdevs": 4, 00:19:21.189 "num_base_bdevs_discovered": 4, 00:19:21.189 "num_base_bdevs_operational": 4, 00:19:21.189 "base_bdevs_list": [ 00:19:21.189 { 00:19:21.189 "name": "BaseBdev1", 00:19:21.189 "uuid": "d80ad971-da4a-4348-90a4-16bfbd6799c0", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 }, 00:19:21.189 { 00:19:21.189 "name": "BaseBdev2", 00:19:21.189 "uuid": "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 }, 00:19:21.189 { 00:19:21.189 "name": "BaseBdev3", 00:19:21.189 "uuid": "8656c32b-538f-4331-a2bf-20098b886a47", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 }, 00:19:21.189 { 00:19:21.189 "name": "BaseBdev4", 00:19:21.189 "uuid": "853da67e-78c0-4f5d-859c-7da26b38c57b", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 } 00:19:21.189 ] 00:19:21.189 }' 00:19:21.189 05:38:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.189 05:38:24 -- common/autotest_common.sh@10 -- # set +x 00:19:21.756 05:38:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:22.014 [2024-10-07 05:38:25.844837] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.014 05:38:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.272 05:38:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.272 "name": "Existed_Raid", 00:19:22.272 "uuid": "42b8ab46-1813-48aa-8b26-dd76437ec6ea", 00:19:22.272 "strip_size_kb": 0, 00:19:22.272 "state": "online", 00:19:22.272 "raid_level": "raid1", 00:19:22.272 "superblock": false, 00:19:22.272 "num_base_bdevs": 4, 00:19:22.272 "num_base_bdevs_discovered": 3, 00:19:22.272 "num_base_bdevs_operational": 3, 00:19:22.272 "base_bdevs_list": [ 00:19:22.272 { 00:19:22.272 "name": null, 00:19:22.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.272 "is_configured": false, 00:19:22.272 "data_offset": 0, 00:19:22.272 "data_size": 65536 00:19:22.272 }, 00:19:22.272 { 00:19:22.272 "name": "BaseBdev2", 00:19:22.272 "uuid": "6d2ca97e-90cc-4618-b9f8-c53a0f629e1e", 00:19:22.272 "is_configured": true, 00:19:22.272 "data_offset": 0, 00:19:22.272 "data_size": 65536 00:19:22.272 }, 00:19:22.272 { 00:19:22.272 "name": "BaseBdev3", 00:19:22.272 "uuid": "8656c32b-538f-4331-a2bf-20098b886a47", 00:19:22.272 "is_configured": true, 00:19:22.272 "data_offset": 0, 00:19:22.272 "data_size": 65536 00:19:22.272 }, 00:19:22.272 { 00:19:22.272 "name": "BaseBdev4", 00:19:22.272 "uuid": "853da67e-78c0-4f5d-859c-7da26b38c57b", 00:19:22.272 "is_configured": true, 00:19:22.272 "data_offset": 0, 00:19:22.272 "data_size": 65536 00:19:22.272 } 00:19:22.272 ] 00:19:22.272 }' 00:19:22.272 05:38:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.272 05:38:26 -- common/autotest_common.sh@10 -- # set +x 00:19:22.839 05:38:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:22.839 05:38:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.839 05:38:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.839 05:38:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.097 05:38:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.097 05:38:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.097 05:38:26 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:23.355 [2024-10-07 05:38:27.164545] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:23.355 05:38:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.355 05:38:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.355 05:38:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.355 05:38:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.613 05:38:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.613 05:38:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.613 05:38:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:23.872 [2024-10-07 05:38:27.680573] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.873 05:38:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.873 05:38:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.873 05:38:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.873 05:38:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:24.131 05:38:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:24.131 05:38:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:24.131 05:38:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:24.389 [2024-10-07 05:38:28.197278] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:24.389 [2024-10-07 05:38:28.197453] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.389 [2024-10-07 05:38:28.197628] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.389 [2024-10-07 05:38:28.262099] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.389 [2024-10-07 05:38:28.262341] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:24.389 05:38:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:24.389 05:38:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:24.389 05:38:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.389 05:38:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:24.645 05:38:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:24.646 05:38:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:24.646 05:38:28 -- bdev/bdev_raid.sh@287 -- # killprocess 154799 00:19:24.646 05:38:28 -- common/autotest_common.sh@926 -- # '[' -z 154799 ']' 00:19:24.646 05:38:28 -- common/autotest_common.sh@930 -- # kill -0 154799 00:19:24.646 05:38:28 -- common/autotest_common.sh@931 -- # uname 00:19:24.646 05:38:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:24.646 05:38:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 154799 00:19:24.646 killing process with pid 154799 00:19:24.646 05:38:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:24.646 05:38:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:24.646 05:38:28 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 154799' 00:19:24.646 05:38:28 -- common/autotest_common.sh@945 -- # kill 154799 00:19:24.646 05:38:28 -- common/autotest_common.sh@950 -- # wait 154799 00:19:24.646 [2024-10-07 05:38:28.491544] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.646 [2024-10-07 05:38:28.491661] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.579 ************************************ 00:19:25.579 END TEST raid_state_function_test 00:19:25.579 ************************************ 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:25.579 00:19:25.579 real 0m13.875s 00:19:25.579 user 0m24.793s 00:19:25.579 sys 0m1.612s 00:19:25.579 05:38:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.579 05:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:25.579 05:38:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:25.579 05:38:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.579 05:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:25.579 ************************************ 00:19:25.579 START TEST raid_state_function_test_sb 00:19:25.579 ************************************ 00:19:25.579 05:38:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:25.579 05:38:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:25.580 05:38:29 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=155634 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 155634' 00:19:25.580 Process raid pid: 155634 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:25.580 05:38:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 155634 /var/tmp/spdk-raid.sock 00:19:25.580 05:38:29 -- common/autotest_common.sh@819 -- # '[' -z 155634 ']' 00:19:25.580 05:38:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.580 05:38:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:25.580 05:38:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.580 05:38:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:25.580 05:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:25.580 [2024-10-07 05:38:29.547368] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:25.580 [2024-10-07 05:38:29.547577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.838 [2024-10-07 05:38:29.711283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.097 [2024-10-07 05:38:29.882537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.097 [2024-10-07 05:38:30.051022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.666 05:38:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:26.666 05:38:30 -- common/autotest_common.sh@852 -- # return 0 00:19:26.666 05:38:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:26.925 [2024-10-07 05:38:30.763505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.925 [2024-10-07 05:38:30.763591] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.925 [2024-10-07 05:38:30.763612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.925 [2024-10-07 05:38:30.763636] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.925 [2024-10-07 05:38:30.763647] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.925 [2024-10-07 05:38:30.763688] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.925 [2024-10-07 05:38:30.763698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.925 [2024-10-07 05:38:30.763724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
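Note for readers skimming this trace: every verify_raid_bdev_state check in this log follows the same RPC pattern against the dedicated raid socket. A condensed paraphrase of that sequence, reusing the paths, flags, and bdev names that appear in the captured output, is sketched below; it is illustrative only and not part of the logged run.

    # create a malloc bdev (32 MiB, 512-byte blocks) to serve as a base device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    # let examine finish, then confirm the bdev is visible (2000 ms timeout)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
    # assemble the raid1 bdev; -s is the superblock_create_arg used by the _sb test variant
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # dump raid bdev state and select the entry under test with jq
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # deleting a base bdev exercises _raid_bdev_remove_base_bdev; with raid1 redundancy the
    # array stays online and num_base_bdevs_operational drops by one, as seen earlier in this trace
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1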
00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.925 05:38:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.184 05:38:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.184 "name": "Existed_Raid", 00:19:27.184 "uuid": "32429429-b19b-4b64-8200-6bc96406c54a", 00:19:27.184 "strip_size_kb": 0, 00:19:27.184 "state": "configuring", 00:19:27.184 "raid_level": "raid1", 00:19:27.184 "superblock": true, 00:19:27.184 "num_base_bdevs": 4, 00:19:27.184 "num_base_bdevs_discovered": 0, 00:19:27.184 "num_base_bdevs_operational": 4, 00:19:27.184 "base_bdevs_list": [ 00:19:27.184 { 00:19:27.184 "name": "BaseBdev1", 00:19:27.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.184 "is_configured": false, 00:19:27.184 "data_offset": 0, 00:19:27.184 "data_size": 0 00:19:27.184 }, 00:19:27.184 { 00:19:27.184 "name": "BaseBdev2", 00:19:27.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.184 "is_configured": false, 00:19:27.184 "data_offset": 0, 00:19:27.184 "data_size": 0 00:19:27.184 }, 00:19:27.184 { 00:19:27.184 "name": "BaseBdev3", 00:19:27.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.184 "is_configured": false, 00:19:27.184 "data_offset": 0, 00:19:27.184 "data_size": 0 00:19:27.184 }, 00:19:27.184 { 00:19:27.184 "name": "BaseBdev4", 00:19:27.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.184 "is_configured": false, 00:19:27.184 "data_offset": 0, 00:19:27.184 "data_size": 0 00:19:27.184 } 00:19:27.184 ] 00:19:27.184 }' 00:19:27.184 05:38:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.184 05:38:31 -- common/autotest_common.sh@10 -- # set +x 00:19:27.751 05:38:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:28.010 [2024-10-07 05:38:31.963584] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.010 [2024-10-07 05:38:31.963630] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:28.010 05:38:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.269 [2024-10-07 05:38:32.235644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.269 [2024-10-07 05:38:32.235715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.269 [2024-10-07 05:38:32.235739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.269 [2024-10-07 05:38:32.235770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.269 [2024-10-07 05:38:32.235780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.269 [2024-10-07 05:38:32.235824] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.269 [2024-10-07 05:38:32.235834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.269 [2024-10-07 05:38:32.235860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.528 05:38:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:28.786 [2024-10-07 05:38:32.537527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.786 BaseBdev1 00:19:28.787 05:38:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:28.787 05:38:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:28.787 05:38:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:28.787 05:38:32 -- common/autotest_common.sh@889 -- # local i 00:19:28.787 05:38:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:28.787 05:38:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:28.787 05:38:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:29.045 05:38:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:29.304 [ 00:19:29.304 { 00:19:29.304 "name": "BaseBdev1", 00:19:29.304 "aliases": [ 00:19:29.304 "29d8244c-a6ca-45fe-bbf0-c5d0caf849b2" 00:19:29.304 ], 00:19:29.304 "product_name": "Malloc disk", 00:19:29.304 "block_size": 512, 00:19:29.304 "num_blocks": 65536, 00:19:29.304 "uuid": "29d8244c-a6ca-45fe-bbf0-c5d0caf849b2", 00:19:29.304 "assigned_rate_limits": { 00:19:29.304 "rw_ios_per_sec": 0, 00:19:29.304 "rw_mbytes_per_sec": 0, 00:19:29.304 "r_mbytes_per_sec": 0, 00:19:29.304 "w_mbytes_per_sec": 0 00:19:29.304 }, 00:19:29.304 "claimed": true, 00:19:29.304 "claim_type": "exclusive_write", 00:19:29.304 "zoned": false, 00:19:29.304 "supported_io_types": { 00:19:29.304 "read": true, 00:19:29.304 "write": true, 00:19:29.304 "unmap": true, 00:19:29.304 "write_zeroes": true, 00:19:29.304 "flush": true, 00:19:29.304 "reset": true, 00:19:29.304 "compare": false, 00:19:29.304 "compare_and_write": false, 00:19:29.304 "abort": true, 00:19:29.304 "nvme_admin": false, 00:19:29.304 "nvme_io": false 00:19:29.304 }, 00:19:29.304 "memory_domains": [ 00:19:29.304 { 00:19:29.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.304 "dma_device_type": 2 00:19:29.304 } 00:19:29.304 ], 00:19:29.304 "driver_specific": {} 00:19:29.304 } 00:19:29.304 ] 00:19:29.304 05:38:33 -- common/autotest_common.sh@895 -- # return 0 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.304 05:38:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.305 05:38:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.563 05:38:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.563 "name": "Existed_Raid", 00:19:29.563 "uuid": "bac8bec2-acde-4e99-9076-7fcf8238da4c", 00:19:29.563 "strip_size_kb": 0, 00:19:29.563 "state": "configuring", 00:19:29.563 "raid_level": "raid1", 00:19:29.563 "superblock": true, 00:19:29.563 "num_base_bdevs": 4, 00:19:29.563 "num_base_bdevs_discovered": 1, 00:19:29.563 "num_base_bdevs_operational": 4, 00:19:29.563 "base_bdevs_list": [ 00:19:29.563 { 00:19:29.563 "name": "BaseBdev1", 00:19:29.563 "uuid": "29d8244c-a6ca-45fe-bbf0-c5d0caf849b2", 00:19:29.563 "is_configured": true, 00:19:29.563 "data_offset": 2048, 00:19:29.563 "data_size": 63488 00:19:29.563 }, 00:19:29.563 { 00:19:29.563 "name": "BaseBdev2", 00:19:29.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.563 "is_configured": false, 00:19:29.564 "data_offset": 0, 00:19:29.564 "data_size": 0 00:19:29.564 }, 00:19:29.564 { 00:19:29.564 "name": "BaseBdev3", 00:19:29.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.564 "is_configured": false, 00:19:29.564 "data_offset": 0, 00:19:29.564 "data_size": 0 00:19:29.564 }, 00:19:29.564 { 00:19:29.564 "name": "BaseBdev4", 00:19:29.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.564 "is_configured": false, 00:19:29.564 "data_offset": 0, 00:19:29.564 "data_size": 0 00:19:29.564 } 00:19:29.564 ] 00:19:29.564 }' 00:19:29.564 05:38:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.564 05:38:33 -- common/autotest_common.sh@10 -- # set +x 00:19:30.131 05:38:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.390 [2024-10-07 05:38:34.133841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.390 [2024-10-07 05:38:34.133911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:30.390 05:38:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:30.390 05:38:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:30.649 05:38:34 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.908 BaseBdev1 00:19:30.908 05:38:34 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:30.908 05:38:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:30.908 05:38:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:30.908 05:38:34 -- common/autotest_common.sh@889 -- # local i 00:19:30.908 05:38:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:30.908 05:38:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:30.908 05:38:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.167 05:38:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.426 [ 00:19:31.426 { 00:19:31.426 "name": "BaseBdev1", 00:19:31.426 "aliases": [ 00:19:31.426 "4389801a-6b89-4eb0-82c8-3a37f7a24ffe" 00:19:31.426 
], 00:19:31.426 "product_name": "Malloc disk", 00:19:31.426 "block_size": 512, 00:19:31.426 "num_blocks": 65536, 00:19:31.426 "uuid": "4389801a-6b89-4eb0-82c8-3a37f7a24ffe", 00:19:31.426 "assigned_rate_limits": { 00:19:31.426 "rw_ios_per_sec": 0, 00:19:31.426 "rw_mbytes_per_sec": 0, 00:19:31.426 "r_mbytes_per_sec": 0, 00:19:31.426 "w_mbytes_per_sec": 0 00:19:31.426 }, 00:19:31.426 "claimed": false, 00:19:31.426 "zoned": false, 00:19:31.426 "supported_io_types": { 00:19:31.426 "read": true, 00:19:31.426 "write": true, 00:19:31.426 "unmap": true, 00:19:31.426 "write_zeroes": true, 00:19:31.426 "flush": true, 00:19:31.426 "reset": true, 00:19:31.426 "compare": false, 00:19:31.426 "compare_and_write": false, 00:19:31.426 "abort": true, 00:19:31.426 "nvme_admin": false, 00:19:31.426 "nvme_io": false 00:19:31.426 }, 00:19:31.426 "memory_domains": [ 00:19:31.426 { 00:19:31.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.426 "dma_device_type": 2 00:19:31.426 } 00:19:31.426 ], 00:19:31.426 "driver_specific": {} 00:19:31.426 } 00:19:31.426 ] 00:19:31.426 05:38:35 -- common/autotest_common.sh@895 -- # return 0 00:19:31.426 05:38:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:31.687 [2024-10-07 05:38:35.450210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.687 [2024-10-07 05:38:35.452099] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.687 [2024-10-07 05:38:35.452178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.687 [2024-10-07 05:38:35.452193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:31.687 [2024-10-07 05:38:35.452221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:31.687 [2024-10-07 05:38:35.452231] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:31.687 [2024-10-07 05:38:35.452249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.687 "name": "Existed_Raid", 
00:19:31.687 "uuid": "3f9b8526-a407-40ac-8647-b4eabae36faa", 00:19:31.687 "strip_size_kb": 0, 00:19:31.687 "state": "configuring", 00:19:31.687 "raid_level": "raid1", 00:19:31.687 "superblock": true, 00:19:31.687 "num_base_bdevs": 4, 00:19:31.687 "num_base_bdevs_discovered": 1, 00:19:31.687 "num_base_bdevs_operational": 4, 00:19:31.687 "base_bdevs_list": [ 00:19:31.687 { 00:19:31.687 "name": "BaseBdev1", 00:19:31.687 "uuid": "4389801a-6b89-4eb0-82c8-3a37f7a24ffe", 00:19:31.687 "is_configured": true, 00:19:31.687 "data_offset": 2048, 00:19:31.687 "data_size": 63488 00:19:31.687 }, 00:19:31.687 { 00:19:31.687 "name": "BaseBdev2", 00:19:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.687 "is_configured": false, 00:19:31.687 "data_offset": 0, 00:19:31.687 "data_size": 0 00:19:31.687 }, 00:19:31.687 { 00:19:31.687 "name": "BaseBdev3", 00:19:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.687 "is_configured": false, 00:19:31.687 "data_offset": 0, 00:19:31.687 "data_size": 0 00:19:31.687 }, 00:19:31.687 { 00:19:31.687 "name": "BaseBdev4", 00:19:31.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.687 "is_configured": false, 00:19:31.687 "data_offset": 0, 00:19:31.687 "data_size": 0 00:19:31.687 } 00:19:31.687 ] 00:19:31.687 }' 00:19:31.687 05:38:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.687 05:38:35 -- common/autotest_common.sh@10 -- # set +x 00:19:32.623 05:38:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:32.623 [2024-10-07 05:38:36.477448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.623 BaseBdev2 00:19:32.623 05:38:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:32.623 05:38:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:32.623 05:38:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:32.623 05:38:36 -- common/autotest_common.sh@889 -- # local i 00:19:32.623 05:38:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:32.623 05:38:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:32.623 05:38:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:32.882 05:38:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.140 [ 00:19:33.140 { 00:19:33.140 "name": "BaseBdev2", 00:19:33.140 "aliases": [ 00:19:33.140 "73cdf7b7-1af6-454d-a76d-96686c4ccc77" 00:19:33.140 ], 00:19:33.140 "product_name": "Malloc disk", 00:19:33.140 "block_size": 512, 00:19:33.140 "num_blocks": 65536, 00:19:33.140 "uuid": "73cdf7b7-1af6-454d-a76d-96686c4ccc77", 00:19:33.140 "assigned_rate_limits": { 00:19:33.140 "rw_ios_per_sec": 0, 00:19:33.140 "rw_mbytes_per_sec": 0, 00:19:33.140 "r_mbytes_per_sec": 0, 00:19:33.140 "w_mbytes_per_sec": 0 00:19:33.140 }, 00:19:33.140 "claimed": true, 00:19:33.140 "claim_type": "exclusive_write", 00:19:33.140 "zoned": false, 00:19:33.140 "supported_io_types": { 00:19:33.140 "read": true, 00:19:33.140 "write": true, 00:19:33.140 "unmap": true, 00:19:33.140 "write_zeroes": true, 00:19:33.140 "flush": true, 00:19:33.140 "reset": true, 00:19:33.140 "compare": false, 00:19:33.140 "compare_and_write": false, 00:19:33.140 "abort": true, 00:19:33.140 "nvme_admin": false, 00:19:33.140 "nvme_io": false 00:19:33.140 }, 00:19:33.140 
"memory_domains": [ 00:19:33.140 { 00:19:33.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.140 "dma_device_type": 2 00:19:33.140 } 00:19:33.140 ], 00:19:33.140 "driver_specific": {} 00:19:33.140 } 00:19:33.140 ] 00:19:33.140 05:38:36 -- common/autotest_common.sh@895 -- # return 0 00:19:33.140 05:38:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:33.140 05:38:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.141 05:38:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.399 05:38:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.399 "name": "Existed_Raid", 00:19:33.399 "uuid": "3f9b8526-a407-40ac-8647-b4eabae36faa", 00:19:33.399 "strip_size_kb": 0, 00:19:33.399 "state": "configuring", 00:19:33.399 "raid_level": "raid1", 00:19:33.399 "superblock": true, 00:19:33.399 "num_base_bdevs": 4, 00:19:33.399 "num_base_bdevs_discovered": 2, 00:19:33.399 "num_base_bdevs_operational": 4, 00:19:33.399 "base_bdevs_list": [ 00:19:33.399 { 00:19:33.399 "name": "BaseBdev1", 00:19:33.399 "uuid": "4389801a-6b89-4eb0-82c8-3a37f7a24ffe", 00:19:33.399 "is_configured": true, 00:19:33.399 "data_offset": 2048, 00:19:33.399 "data_size": 63488 00:19:33.399 }, 00:19:33.399 { 00:19:33.399 "name": "BaseBdev2", 00:19:33.399 "uuid": "73cdf7b7-1af6-454d-a76d-96686c4ccc77", 00:19:33.399 "is_configured": true, 00:19:33.399 "data_offset": 2048, 00:19:33.399 "data_size": 63488 00:19:33.399 }, 00:19:33.399 { 00:19:33.399 "name": "BaseBdev3", 00:19:33.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.399 "is_configured": false, 00:19:33.399 "data_offset": 0, 00:19:33.399 "data_size": 0 00:19:33.399 }, 00:19:33.399 { 00:19:33.399 "name": "BaseBdev4", 00:19:33.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.399 "is_configured": false, 00:19:33.399 "data_offset": 0, 00:19:33.399 "data_size": 0 00:19:33.399 } 00:19:33.399 ] 00:19:33.399 }' 00:19:33.399 05:38:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.399 05:38:37 -- common/autotest_common.sh@10 -- # set +x 00:19:33.966 05:38:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:33.966 [2024-10-07 05:38:37.941785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.966 BaseBdev3 00:19:34.226 05:38:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:34.226 05:38:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:34.226 05:38:37 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:19:34.226 05:38:37 -- common/autotest_common.sh@889 -- # local i 00:19:34.226 05:38:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:34.226 05:38:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:34.226 05:38:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.226 05:38:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:34.485 [ 00:19:34.485 { 00:19:34.485 "name": "BaseBdev3", 00:19:34.485 "aliases": [ 00:19:34.485 "8bdd3093-2fdf-40c9-aa8f-2742026f2d79" 00:19:34.485 ], 00:19:34.485 "product_name": "Malloc disk", 00:19:34.485 "block_size": 512, 00:19:34.485 "num_blocks": 65536, 00:19:34.485 "uuid": "8bdd3093-2fdf-40c9-aa8f-2742026f2d79", 00:19:34.485 "assigned_rate_limits": { 00:19:34.485 "rw_ios_per_sec": 0, 00:19:34.485 "rw_mbytes_per_sec": 0, 00:19:34.485 "r_mbytes_per_sec": 0, 00:19:34.485 "w_mbytes_per_sec": 0 00:19:34.485 }, 00:19:34.485 "claimed": true, 00:19:34.485 "claim_type": "exclusive_write", 00:19:34.485 "zoned": false, 00:19:34.485 "supported_io_types": { 00:19:34.485 "read": true, 00:19:34.485 "write": true, 00:19:34.485 "unmap": true, 00:19:34.485 "write_zeroes": true, 00:19:34.485 "flush": true, 00:19:34.485 "reset": true, 00:19:34.485 "compare": false, 00:19:34.485 "compare_and_write": false, 00:19:34.485 "abort": true, 00:19:34.485 "nvme_admin": false, 00:19:34.485 "nvme_io": false 00:19:34.485 }, 00:19:34.485 "memory_domains": [ 00:19:34.485 { 00:19:34.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.485 "dma_device_type": 2 00:19:34.485 } 00:19:34.485 ], 00:19:34.485 "driver_specific": {} 00:19:34.485 } 00:19:34.485 ] 00:19:34.485 05:38:38 -- common/autotest_common.sh@895 -- # return 0 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.485 05:38:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.744 05:38:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.744 "name": "Existed_Raid", 00:19:34.744 "uuid": "3f9b8526-a407-40ac-8647-b4eabae36faa", 00:19:34.744 "strip_size_kb": 0, 00:19:34.744 "state": "configuring", 00:19:34.744 "raid_level": "raid1", 00:19:34.744 "superblock": true, 00:19:34.744 "num_base_bdevs": 4, 00:19:34.744 "num_base_bdevs_discovered": 3, 00:19:34.744 "num_base_bdevs_operational": 4, 00:19:34.744 "base_bdevs_list": [ 00:19:34.744 { 
00:19:34.744 "name": "BaseBdev1", 00:19:34.744 "uuid": "4389801a-6b89-4eb0-82c8-3a37f7a24ffe", 00:19:34.744 "is_configured": true, 00:19:34.744 "data_offset": 2048, 00:19:34.744 "data_size": 63488 00:19:34.744 }, 00:19:34.744 { 00:19:34.744 "name": "BaseBdev2", 00:19:34.744 "uuid": "73cdf7b7-1af6-454d-a76d-96686c4ccc77", 00:19:34.744 "is_configured": true, 00:19:34.744 "data_offset": 2048, 00:19:34.744 "data_size": 63488 00:19:34.744 }, 00:19:34.744 { 00:19:34.744 "name": "BaseBdev3", 00:19:34.744 "uuid": "8bdd3093-2fdf-40c9-aa8f-2742026f2d79", 00:19:34.744 "is_configured": true, 00:19:34.744 "data_offset": 2048, 00:19:34.744 "data_size": 63488 00:19:34.744 }, 00:19:34.744 { 00:19:34.744 "name": "BaseBdev4", 00:19:34.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.744 "is_configured": false, 00:19:34.744 "data_offset": 0, 00:19:34.744 "data_size": 0 00:19:34.744 } 00:19:34.744 ] 00:19:34.744 }' 00:19:34.744 05:38:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.744 05:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:35.311 05:38:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:35.570 [2024-10-07 05:38:39.538098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:35.570 [2024-10-07 05:38:39.538336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:35.570 [2024-10-07 05:38:39.538352] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:35.570 [2024-10-07 05:38:39.538467] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:35.570 [2024-10-07 05:38:39.538847] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:35.570 [2024-10-07 05:38:39.538871] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:35.570 BaseBdev4 00:19:35.570 [2024-10-07 05:38:39.539029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.829 05:38:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:35.829 05:38:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:35.829 05:38:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:35.829 05:38:39 -- common/autotest_common.sh@889 -- # local i 00:19:35.829 05:38:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:35.829 05:38:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:35.829 05:38:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.829 05:38:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:36.088 [ 00:19:36.088 { 00:19:36.088 "name": "BaseBdev4", 00:19:36.088 "aliases": [ 00:19:36.088 "8953c711-9667-4992-adf6-960a0f5ddf53" 00:19:36.088 ], 00:19:36.088 "product_name": "Malloc disk", 00:19:36.088 "block_size": 512, 00:19:36.088 "num_blocks": 65536, 00:19:36.088 "uuid": "8953c711-9667-4992-adf6-960a0f5ddf53", 00:19:36.088 "assigned_rate_limits": { 00:19:36.088 "rw_ios_per_sec": 0, 00:19:36.088 "rw_mbytes_per_sec": 0, 00:19:36.088 "r_mbytes_per_sec": 0, 00:19:36.088 "w_mbytes_per_sec": 0 00:19:36.088 }, 00:19:36.088 "claimed": true, 00:19:36.088 "claim_type": "exclusive_write", 00:19:36.088 "zoned": false, 
00:19:36.088 "supported_io_types": { 00:19:36.088 "read": true, 00:19:36.088 "write": true, 00:19:36.088 "unmap": true, 00:19:36.088 "write_zeroes": true, 00:19:36.088 "flush": true, 00:19:36.088 "reset": true, 00:19:36.088 "compare": false, 00:19:36.088 "compare_and_write": false, 00:19:36.088 "abort": true, 00:19:36.088 "nvme_admin": false, 00:19:36.088 "nvme_io": false 00:19:36.088 }, 00:19:36.088 "memory_domains": [ 00:19:36.088 { 00:19:36.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.088 "dma_device_type": 2 00:19:36.088 } 00:19:36.088 ], 00:19:36.088 "driver_specific": {} 00:19:36.088 } 00:19:36.088 ] 00:19:36.088 05:38:39 -- common/autotest_common.sh@895 -- # return 0 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.088 05:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.347 05:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.347 "name": "Existed_Raid", 00:19:36.347 "uuid": "3f9b8526-a407-40ac-8647-b4eabae36faa", 00:19:36.347 "strip_size_kb": 0, 00:19:36.347 "state": "online", 00:19:36.347 "raid_level": "raid1", 00:19:36.347 "superblock": true, 00:19:36.347 "num_base_bdevs": 4, 00:19:36.347 "num_base_bdevs_discovered": 4, 00:19:36.347 "num_base_bdevs_operational": 4, 00:19:36.347 "base_bdevs_list": [ 00:19:36.347 { 00:19:36.347 "name": "BaseBdev1", 00:19:36.347 "uuid": "4389801a-6b89-4eb0-82c8-3a37f7a24ffe", 00:19:36.347 "is_configured": true, 00:19:36.347 "data_offset": 2048, 00:19:36.347 "data_size": 63488 00:19:36.347 }, 00:19:36.347 { 00:19:36.347 "name": "BaseBdev2", 00:19:36.347 "uuid": "73cdf7b7-1af6-454d-a76d-96686c4ccc77", 00:19:36.347 "is_configured": true, 00:19:36.347 "data_offset": 2048, 00:19:36.347 "data_size": 63488 00:19:36.347 }, 00:19:36.347 { 00:19:36.347 "name": "BaseBdev3", 00:19:36.347 "uuid": "8bdd3093-2fdf-40c9-aa8f-2742026f2d79", 00:19:36.347 "is_configured": true, 00:19:36.347 "data_offset": 2048, 00:19:36.347 "data_size": 63488 00:19:36.347 }, 00:19:36.347 { 00:19:36.347 "name": "BaseBdev4", 00:19:36.347 "uuid": "8953c711-9667-4992-adf6-960a0f5ddf53", 00:19:36.347 "is_configured": true, 00:19:36.347 "data_offset": 2048, 00:19:36.347 "data_size": 63488 00:19:36.347 } 00:19:36.347 ] 00:19:36.347 }' 00:19:36.347 05:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.347 05:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.914 05:38:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:19:37.172 [2024-10-07 05:38:41.094457] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.431 "name": "Existed_Raid", 00:19:37.431 "uuid": "3f9b8526-a407-40ac-8647-b4eabae36faa", 00:19:37.431 "strip_size_kb": 0, 00:19:37.431 "state": "online", 00:19:37.431 "raid_level": "raid1", 00:19:37.431 "superblock": true, 00:19:37.431 "num_base_bdevs": 4, 00:19:37.431 "num_base_bdevs_discovered": 3, 00:19:37.431 "num_base_bdevs_operational": 3, 00:19:37.431 "base_bdevs_list": [ 00:19:37.431 { 00:19:37.431 "name": null, 00:19:37.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.431 "is_configured": false, 00:19:37.431 "data_offset": 2048, 00:19:37.431 "data_size": 63488 00:19:37.431 }, 00:19:37.431 { 00:19:37.431 "name": "BaseBdev2", 00:19:37.431 "uuid": "73cdf7b7-1af6-454d-a76d-96686c4ccc77", 00:19:37.431 "is_configured": true, 00:19:37.431 "data_offset": 2048, 00:19:37.431 "data_size": 63488 00:19:37.431 }, 00:19:37.431 { 00:19:37.431 "name": "BaseBdev3", 00:19:37.431 "uuid": "8bdd3093-2fdf-40c9-aa8f-2742026f2d79", 00:19:37.431 "is_configured": true, 00:19:37.431 "data_offset": 2048, 00:19:37.431 "data_size": 63488 00:19:37.431 }, 00:19:37.431 { 00:19:37.431 "name": "BaseBdev4", 00:19:37.431 "uuid": "8953c711-9667-4992-adf6-960a0f5ddf53", 00:19:37.431 "is_configured": true, 00:19:37.431 "data_offset": 2048, 00:19:37.431 "data_size": 63488 00:19:37.431 } 00:19:37.431 ] 00:19:37.431 }' 00:19:37.431 05:38:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.431 05:38:41 -- common/autotest_common.sh@10 -- # set +x 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:19:38.366 05:38:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:38.624 [2024-10-07 05:38:42.470652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:38.624 05:38:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:38.624 05:38:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:38.624 05:38:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.624 05:38:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:38.881 05:38:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:38.881 05:38:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.881 05:38:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:39.260 [2024-10-07 05:38:43.054984] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:39.260 05:38:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:39.260 05:38:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:39.260 05:38:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.260 05:38:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:39.517 05:38:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:39.517 05:38:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:39.517 05:38:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:39.776 [2024-10-07 05:38:43.650218] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:39.776 [2024-10-07 05:38:43.650256] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.776 [2024-10-07 05:38:43.650345] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.776 [2024-10-07 05:38:43.721866] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.776 [2024-10-07 05:38:43.721904] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:39.776 05:38:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:39.776 05:38:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:39.776 05:38:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.776 05:38:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:40.035 05:38:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:40.035 05:38:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:40.035 05:38:43 -- bdev/bdev_raid.sh@287 -- # killprocess 155634 00:19:40.035 05:38:43 -- common/autotest_common.sh@926 -- # '[' -z 155634 ']' 00:19:40.035 05:38:43 -- common/autotest_common.sh@930 -- # kill -0 155634 00:19:40.035 05:38:43 -- common/autotest_common.sh@931 -- # uname 00:19:40.035 05:38:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:40.035 05:38:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 155634 00:19:40.035 killing process with pid 155634 00:19:40.035 05:38:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:40.035 05:38:43 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:19:40.035 05:38:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 155634' 00:19:40.035 05:38:43 -- common/autotest_common.sh@945 -- # kill 155634 00:19:40.035 05:38:43 -- common/autotest_common.sh@950 -- # wait 155634 00:19:40.035 [2024-10-07 05:38:43.952981] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.035 [2024-10-07 05:38:43.953096] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.411 ************************************ 00:19:41.411 END TEST raid_state_function_test_sb 00:19:41.411 ************************************ 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:41.411 00:19:41.411 real 0m15.538s 00:19:41.411 user 0m27.685s 00:19:41.411 sys 0m1.854s 00:19:41.411 05:38:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.411 05:38:45 -- common/autotest_common.sh@10 -- # set +x 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:41.411 05:38:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:41.411 05:38:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.411 05:38:45 -- common/autotest_common.sh@10 -- # set +x 00:19:41.411 ************************************ 00:19:41.411 START TEST raid_superblock_test 00:19:41.411 ************************************ 00:19:41.411 05:38:45 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=156674 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:41.411 05:38:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 156674 /var/tmp/spdk-raid.sock 00:19:41.411 05:38:45 -- common/autotest_common.sh@819 -- # '[' -z 156674 ']' 00:19:41.411 05:38:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:41.411 05:38:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:41.411 05:38:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
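The raid_superblock_test that begins here drives a standalone bdev_svc app listening on /var/tmp/spdk-raid.sock and issues all configuration through scripts/rpc.py. A rough sketch of the setup it performs, assembled only from commands that appear verbatim later in this trace (paths, bdev names, sizes and the socket address are copied from the log, not verified independently):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
# the harness polls the socket (waitforlisten) before issuing RPCs
# one 32 MiB, 512-byte-block malloc bdev per base device, each wrapped in a passthru bdev with a fixed UUID
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# ... repeated for malloc2/pt2, malloc3/pt3 and malloc4/pt4 ...
# assemble a raid1 bdev with an on-disk superblock (-s) from the four passthru bdevs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# inspect the assembled array
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'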
00:19:41.411 05:38:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:41.411 05:38:45 -- common/autotest_common.sh@10 -- # set +x 00:19:41.411 [2024-10-07 05:38:45.120487] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:19:41.412 [2024-10-07 05:38:45.120680] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156674 ] 00:19:41.412 [2024-10-07 05:38:45.269306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.671 [2024-10-07 05:38:45.463003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.671 [2024-10-07 05:38:45.649454] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.240 05:38:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:42.240 05:38:46 -- common/autotest_common.sh@852 -- # return 0 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.240 05:38:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:42.499 malloc1 00:19:42.499 05:38:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:42.758 [2024-10-07 05:38:46.546208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:42.758 [2024-10-07 05:38:46.546371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.758 [2024-10-07 05:38:46.546419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:42.758 [2024-10-07 05:38:46.546476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.758 [2024-10-07 05:38:46.549321] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.758 [2024-10-07 05:38:46.549398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:42.758 pt1 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:42.758 05:38:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:43.017 malloc2 00:19:43.017 05:38:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:43.276 [2024-10-07 05:38:47.065874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:43.276 [2024-10-07 05:38:47.066026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.276 [2024-10-07 05:38:47.066082] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:43.276 [2024-10-07 05:38:47.066158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.276 [2024-10-07 05:38:47.068857] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.276 [2024-10-07 05:38:47.068945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:43.276 pt2 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:43.276 05:38:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:43.535 malloc3 00:19:43.535 05:38:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:43.794 [2024-10-07 05:38:47.569606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:43.794 [2024-10-07 05:38:47.569707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.794 [2024-10-07 05:38:47.569759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:43.794 [2024-10-07 05:38:47.569808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.794 [2024-10-07 05:38:47.572420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.794 [2024-10-07 05:38:47.572496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:43.794 pt3 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:43.794 05:38:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:44.053 malloc4 00:19:44.053 05:38:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:44.311 [2024-10-07 05:38:48.136697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:44.311 [2024-10-07 05:38:48.136840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.311 [2024-10-07 05:38:48.136885] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:44.311 [2024-10-07 05:38:48.136936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.311 [2024-10-07 05:38:48.139392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.311 [2024-10-07 05:38:48.139462] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:44.311 pt4 00:19:44.311 05:38:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:44.311 05:38:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:44.311 05:38:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:44.570 [2024-10-07 05:38:48.400796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:44.570 [2024-10-07 05:38:48.402680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.570 [2024-10-07 05:38:48.402772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:44.570 [2024-10-07 05:38:48.402838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:44.570 [2024-10-07 05:38:48.403125] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:44.570 [2024-10-07 05:38:48.403151] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:44.570 [2024-10-07 05:38:48.403315] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:44.570 [2024-10-07 05:38:48.403804] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:44.570 [2024-10-07 05:38:48.403844] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:44.570 [2024-10-07 05:38:48.404089] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:44.570 05:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.829 05:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.829 "name": "raid_bdev1", 00:19:44.829 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:44.829 "strip_size_kb": 0, 00:19:44.829 "state": "online", 00:19:44.829 "raid_level": "raid1", 00:19:44.829 "superblock": true, 00:19:44.829 "num_base_bdevs": 4, 00:19:44.829 "num_base_bdevs_discovered": 4, 00:19:44.829 "num_base_bdevs_operational": 4, 00:19:44.829 "base_bdevs_list": [ 00:19:44.829 { 00:19:44.829 "name": "pt1", 00:19:44.829 "uuid": "b105d13c-505c-5248-bf71-6064ef4517a0", 00:19:44.829 "is_configured": true, 00:19:44.829 "data_offset": 2048, 00:19:44.829 "data_size": 63488 00:19:44.829 }, 00:19:44.829 { 00:19:44.829 "name": "pt2", 00:19:44.829 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:44.829 "is_configured": true, 00:19:44.829 "data_offset": 2048, 00:19:44.829 "data_size": 63488 00:19:44.829 }, 00:19:44.829 { 00:19:44.829 "name": "pt3", 00:19:44.829 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:44.829 "is_configured": true, 00:19:44.829 "data_offset": 2048, 00:19:44.829 "data_size": 63488 00:19:44.829 }, 00:19:44.829 { 00:19:44.829 "name": "pt4", 00:19:44.829 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:44.829 "is_configured": true, 00:19:44.829 "data_offset": 2048, 00:19:44.829 "data_size": 63488 00:19:44.829 } 00:19:44.829 ] 00:19:44.829 }' 00:19:44.829 05:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.829 05:38:48 -- common/autotest_common.sh@10 -- # set +x 00:19:45.398 05:38:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:45.398 05:38:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:45.658 [2024-10-07 05:38:49.549147] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:45.658 05:38:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 00:19:45.658 05:38:49 -- bdev/bdev_raid.sh@380 -- # '[' -z 5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 ']' 00:19:45.658 05:38:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:45.917 [2024-10-07 05:38:49.824977] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:45.917 [2024-10-07 05:38:49.825039] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.917 [2024-10-07 05:38:49.825140] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.917 [2024-10-07 05:38:49.825257] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.917 [2024-10-07 05:38:49.825272] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:45.917 05:38:49 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.917 05:38:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:46.176 05:38:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:46.176 05:38:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:46.176 05:38:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:46.176 05:38:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:19:46.435 05:38:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:46.435 05:38:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:46.693 05:38:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:46.693 05:38:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:46.952 05:38:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:46.952 05:38:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:47.211 05:38:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:47.211 05:38:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:47.470 05:38:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:47.470 05:38:51 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:47.470 05:38:51 -- common/autotest_common.sh@640 -- # local es=0 00:19:47.470 05:38:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:47.470 05:38:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.470 05:38:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:47.470 05:38:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.470 05:38:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:47.470 05:38:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.470 05:38:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:47.470 05:38:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:47.470 05:38:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:47.470 05:38:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:47.730 [2024-10-07 05:38:51.453125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:47.730 [2024-10-07 05:38:51.454806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:47.730 [2024-10-07 05:38:51.454881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:47.730 [2024-10-07 05:38:51.454949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:47.730 [2024-10-07 05:38:51.455016] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:47.730 request: 00:19:47.730 { 00:19:47.730 "name": "raid_bdev1", 00:19:47.730 "raid_level": "raid1", 00:19:47.730 "base_bdevs": [ 00:19:47.730 "malloc1", 00:19:47.730 "malloc2", 00:19:47.730 "malloc3", 00:19:47.730 "malloc4" 00:19:47.730 ], 00:19:47.730 "superblock": false, 00:19:47.730 "method": "bdev_raid_create", 00:19:47.730 "req_id": 1 
00:19:47.730 } 00:19:47.730 Got JSON-RPC error response 00:19:47.730 response: 00:19:47.730 { 00:19:47.730 "code": -17, 00:19:47.730 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:47.730 } 00:19:47.730 [2024-10-07 05:38:51.455101] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:47.730 [2024-10-07 05:38:51.455152] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:47.730 [2024-10-07 05:38:51.455228] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:47.730 [2024-10-07 05:38:51.455262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.730 [2024-10-07 05:38:51.455276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:47.730 05:38:51 -- common/autotest_common.sh@643 -- # es=1 00:19:47.730 05:38:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:47.730 05:38:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:47.730 05:38:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:47.730 05:38:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.730 05:38:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:47.989 [2024-10-07 05:38:51.945151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:47.989 [2024-10-07 05:38:51.945248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.989 [2024-10-07 05:38:51.945287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:47.989 [2024-10-07 05:38:51.945324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.989 [2024-10-07 05:38:51.947547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.989 [2024-10-07 05:38:51.947632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:47.989 [2024-10-07 05:38:51.947741] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:47.989 [2024-10-07 05:38:51.947835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:47.989 pt1 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.989 05:38:51 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.989 05:38:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.248 05:38:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.248 "name": "raid_bdev1", 00:19:48.248 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:48.248 "strip_size_kb": 0, 00:19:48.248 "state": "configuring", 00:19:48.248 "raid_level": "raid1", 00:19:48.248 "superblock": true, 00:19:48.248 "num_base_bdevs": 4, 00:19:48.248 "num_base_bdevs_discovered": 1, 00:19:48.248 "num_base_bdevs_operational": 4, 00:19:48.248 "base_bdevs_list": [ 00:19:48.248 { 00:19:48.248 "name": "pt1", 00:19:48.248 "uuid": "b105d13c-505c-5248-bf71-6064ef4517a0", 00:19:48.248 "is_configured": true, 00:19:48.248 "data_offset": 2048, 00:19:48.248 "data_size": 63488 00:19:48.248 }, 00:19:48.248 { 00:19:48.248 "name": null, 00:19:48.248 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:48.248 "is_configured": false, 00:19:48.248 "data_offset": 2048, 00:19:48.248 "data_size": 63488 00:19:48.248 }, 00:19:48.248 { 00:19:48.248 "name": null, 00:19:48.248 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:48.248 "is_configured": false, 00:19:48.248 "data_offset": 2048, 00:19:48.248 "data_size": 63488 00:19:48.248 }, 00:19:48.248 { 00:19:48.248 "name": null, 00:19:48.248 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:48.248 "is_configured": false, 00:19:48.248 "data_offset": 2048, 00:19:48.248 "data_size": 63488 00:19:48.248 } 00:19:48.248 ] 00:19:48.248 }' 00:19:48.248 05:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.248 05:38:52 -- common/autotest_common.sh@10 -- # set +x 00:19:49.212 05:38:52 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:49.213 05:38:52 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:49.213 [2024-10-07 05:38:52.989328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:49.213 [2024-10-07 05:38:52.989398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.213 [2024-10-07 05:38:52.989443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:49.213 [2024-10-07 05:38:52.989469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.213 [2024-10-07 05:38:52.989909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.213 [2024-10-07 05:38:52.989966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:49.213 [2024-10-07 05:38:52.990064] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:49.213 [2024-10-07 05:38:52.990100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:49.213 pt2 00:19:49.213 05:38:53 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:49.213 [2024-10-07 05:38:53.181373] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.472 "name": "raid_bdev1", 00:19:49.472 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:49.472 "strip_size_kb": 0, 00:19:49.472 "state": "configuring", 00:19:49.472 "raid_level": "raid1", 00:19:49.472 "superblock": true, 00:19:49.472 "num_base_bdevs": 4, 00:19:49.472 "num_base_bdevs_discovered": 1, 00:19:49.472 "num_base_bdevs_operational": 4, 00:19:49.472 "base_bdevs_list": [ 00:19:49.472 { 00:19:49.472 "name": "pt1", 00:19:49.472 "uuid": "b105d13c-505c-5248-bf71-6064ef4517a0", 00:19:49.472 "is_configured": true, 00:19:49.472 "data_offset": 2048, 00:19:49.472 "data_size": 63488 00:19:49.472 }, 00:19:49.472 { 00:19:49.472 "name": null, 00:19:49.472 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:49.472 "is_configured": false, 00:19:49.472 "data_offset": 2048, 00:19:49.472 "data_size": 63488 00:19:49.472 }, 00:19:49.472 { 00:19:49.472 "name": null, 00:19:49.472 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:49.472 "is_configured": false, 00:19:49.472 "data_offset": 2048, 00:19:49.472 "data_size": 63488 00:19:49.472 }, 00:19:49.472 { 00:19:49.472 "name": null, 00:19:49.472 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:49.472 "is_configured": false, 00:19:49.472 "data_offset": 2048, 00:19:49.472 "data_size": 63488 00:19:49.472 } 00:19:49.472 ] 00:19:49.472 }' 00:19:49.472 05:38:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.472 05:38:53 -- common/autotest_common.sh@10 -- # set +x 00:19:50.038 05:38:53 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:50.038 05:38:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:50.038 05:38:53 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:50.295 [2024-10-07 05:38:54.233672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:50.295 [2024-10-07 05:38:54.233799] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.295 [2024-10-07 05:38:54.233851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:50.295 [2024-10-07 05:38:54.233879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.295 [2024-10-07 05:38:54.234513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.295 [2024-10-07 05:38:54.234607] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:50.296 [2024-10-07 05:38:54.234744] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:50.296 [2024-10-07 
05:38:54.234772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:50.296 pt2 00:19:50.296 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:50.296 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:50.296 05:38:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:50.554 [2024-10-07 05:38:54.421641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:50.554 [2024-10-07 05:38:54.421733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.554 [2024-10-07 05:38:54.421770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:50.554 [2024-10-07 05:38:54.421802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.554 [2024-10-07 05:38:54.422240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.554 [2024-10-07 05:38:54.422310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:50.554 [2024-10-07 05:38:54.422426] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:50.554 [2024-10-07 05:38:54.422450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:50.554 pt3 00:19:50.554 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:50.554 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:50.554 05:38:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:50.813 [2024-10-07 05:38:54.685718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:50.813 [2024-10-07 05:38:54.685814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.813 [2024-10-07 05:38:54.685851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:50.813 [2024-10-07 05:38:54.685880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.814 [2024-10-07 05:38:54.686362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.814 [2024-10-07 05:38:54.686432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:50.814 [2024-10-07 05:38:54.686625] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:50.814 [2024-10-07 05:38:54.686653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:50.814 [2024-10-07 05:38:54.686822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:50.814 [2024-10-07 05:38:54.686842] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:50.814 [2024-10-07 05:38:54.686973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:50.814 [2024-10-07 05:38:54.687354] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:50.814 [2024-10-07 05:38:54.687380] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:50.814 [2024-10-07 05:38:54.687578] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.814 pt4 
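At this point the array has been reassembled without an explicit bdev_raid_create: each bdev_passthru_create triggers an examine pass, the raid superblock written earlier is found on the passthru bdev, the bdev is claimed, and raid_bdev1 comes back online automatically once the fourth base bdev appears. After verifying the array is online, the trace exercises raid1 redundancy by pulling a base bdev out of the running array; a minimal sketch of that check, again using only calls shown in this log:

# remove one base bdev from the running raid1 array
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
# raid1 has redundancy, so the array is expected to stay "online" with 3 of 4 base bdevs discovered
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'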
00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.814 05:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.073 05:38:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.073 "name": "raid_bdev1", 00:19:51.073 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:51.073 "strip_size_kb": 0, 00:19:51.073 "state": "online", 00:19:51.073 "raid_level": "raid1", 00:19:51.073 "superblock": true, 00:19:51.073 "num_base_bdevs": 4, 00:19:51.073 "num_base_bdevs_discovered": 4, 00:19:51.073 "num_base_bdevs_operational": 4, 00:19:51.073 "base_bdevs_list": [ 00:19:51.073 { 00:19:51.073 "name": "pt1", 00:19:51.073 "uuid": "b105d13c-505c-5248-bf71-6064ef4517a0", 00:19:51.073 "is_configured": true, 00:19:51.073 "data_offset": 2048, 00:19:51.073 "data_size": 63488 00:19:51.073 }, 00:19:51.073 { 00:19:51.073 "name": "pt2", 00:19:51.073 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:51.073 "is_configured": true, 00:19:51.073 "data_offset": 2048, 00:19:51.073 "data_size": 63488 00:19:51.073 }, 00:19:51.073 { 00:19:51.073 "name": "pt3", 00:19:51.073 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:51.073 "is_configured": true, 00:19:51.073 "data_offset": 2048, 00:19:51.073 "data_size": 63488 00:19:51.073 }, 00:19:51.073 { 00:19:51.073 "name": "pt4", 00:19:51.073 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:51.073 "is_configured": true, 00:19:51.073 "data_offset": 2048, 00:19:51.073 "data_size": 63488 00:19:51.073 } 00:19:51.073 ] 00:19:51.073 }' 00:19:51.073 05:38:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.073 05:38:54 -- common/autotest_common.sh@10 -- # set +x 00:19:51.641 05:38:55 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:51.641 05:38:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:51.901 [2024-10-07 05:38:55.798153] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:51.901 05:38:55 -- bdev/bdev_raid.sh@430 -- # '[' 5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 '!=' 5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 ']' 00:19:51.901 05:38:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:51.901 05:38:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:51.901 05:38:55 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:51.901 05:38:55 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:52.160 [2024-10-07 05:38:56.074064] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.160 05:38:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.419 05:38:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.419 "name": "raid_bdev1", 00:19:52.419 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:52.419 "strip_size_kb": 0, 00:19:52.419 "state": "online", 00:19:52.419 "raid_level": "raid1", 00:19:52.419 "superblock": true, 00:19:52.419 "num_base_bdevs": 4, 00:19:52.419 "num_base_bdevs_discovered": 3, 00:19:52.419 "num_base_bdevs_operational": 3, 00:19:52.419 "base_bdevs_list": [ 00:19:52.419 { 00:19:52.419 "name": null, 00:19:52.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.419 "is_configured": false, 00:19:52.419 "data_offset": 2048, 00:19:52.419 "data_size": 63488 00:19:52.419 }, 00:19:52.419 { 00:19:52.419 "name": "pt2", 00:19:52.419 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:52.419 "is_configured": true, 00:19:52.419 "data_offset": 2048, 00:19:52.419 "data_size": 63488 00:19:52.419 }, 00:19:52.419 { 00:19:52.419 "name": "pt3", 00:19:52.419 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:52.419 "is_configured": true, 00:19:52.419 "data_offset": 2048, 00:19:52.419 "data_size": 63488 00:19:52.419 }, 00:19:52.419 { 00:19:52.419 "name": "pt4", 00:19:52.419 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:52.419 "is_configured": true, 00:19:52.419 "data_offset": 2048, 00:19:52.419 "data_size": 63488 00:19:52.419 } 00:19:52.419 ] 00:19:52.419 }' 00:19:52.419 05:38:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.419 05:38:56 -- common/autotest_common.sh@10 -- # set +x 00:19:53.356 05:38:57 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:53.356 [2024-10-07 05:38:57.298317] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:53.356 [2024-10-07 05:38:57.298355] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:53.356 [2024-10-07 05:38:57.298461] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.356 [2024-10-07 05:38:57.298631] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.356 [2024-10-07 05:38:57.298646] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:53.356 05:38:57 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:53.356 05:38:57 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:53.616 05:38:57 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:53.616 05:38:57 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:53.616 05:38:57 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:53.616 05:38:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:53.616 05:38:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:53.875 05:38:57 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:53.875 05:38:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:53.875 05:38:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:54.134 05:38:58 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:54.134 05:38:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:54.134 05:38:58 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:54.701 [2024-10-07 05:38:58.622888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:54.701 [2024-10-07 05:38:58.622961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.701 [2024-10-07 05:38:58.622995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:54.701 [2024-10-07 05:38:58.623029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.701 [2024-10-07 05:38:58.625341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.701 [2024-10-07 05:38:58.625405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:54.701 [2024-10-07 05:38:58.625512] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:54.701 [2024-10-07 05:38:58.625561] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:54.701 pt2 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.701 05:38:58 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.960 05:38:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.960 "name": "raid_bdev1", 00:19:54.960 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:54.960 "strip_size_kb": 0, 00:19:54.960 "state": "configuring", 00:19:54.960 "raid_level": "raid1", 00:19:54.960 "superblock": true, 00:19:54.960 "num_base_bdevs": 4, 00:19:54.960 "num_base_bdevs_discovered": 1, 00:19:54.960 "num_base_bdevs_operational": 3, 00:19:54.960 "base_bdevs_list": [ 00:19:54.960 { 00:19:54.960 "name": null, 00:19:54.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.960 "is_configured": false, 00:19:54.960 "data_offset": 2048, 00:19:54.960 "data_size": 63488 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "name": "pt2", 00:19:54.960 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:54.960 "is_configured": true, 00:19:54.960 "data_offset": 2048, 00:19:54.960 "data_size": 63488 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "name": null, 00:19:54.960 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:54.960 "is_configured": false, 00:19:54.960 "data_offset": 2048, 00:19:54.960 "data_size": 63488 00:19:54.960 }, 00:19:54.960 { 00:19:54.960 "name": null, 00:19:54.960 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:54.960 "is_configured": false, 00:19:54.960 "data_offset": 2048, 00:19:54.960 "data_size": 63488 00:19:54.960 } 00:19:54.960 ] 00:19:54.960 }' 00:19:54.960 05:38:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.960 05:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:55.919 [2024-10-07 05:38:59.831175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:55.919 [2024-10-07 05:38:59.831226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.919 [2024-10-07 05:38:59.831261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:55.919 [2024-10-07 05:38:59.831281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.919 [2024-10-07 05:38:59.831707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.919 [2024-10-07 05:38:59.831774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:55.919 [2024-10-07 05:38:59.831868] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:55.919 [2024-10-07 05:38:59.831890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:55.919 pt3 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.919 05:38:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.178 05:39:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.178 "name": "raid_bdev1", 00:19:56.178 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:56.178 "strip_size_kb": 0, 00:19:56.178 "state": "configuring", 00:19:56.178 "raid_level": "raid1", 00:19:56.178 "superblock": true, 00:19:56.178 "num_base_bdevs": 4, 00:19:56.178 "num_base_bdevs_discovered": 2, 00:19:56.178 "num_base_bdevs_operational": 3, 00:19:56.178 "base_bdevs_list": [ 00:19:56.178 { 00:19:56.178 "name": null, 00:19:56.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.178 "is_configured": false, 00:19:56.178 "data_offset": 2048, 00:19:56.178 "data_size": 63488 00:19:56.178 }, 00:19:56.178 { 00:19:56.178 "name": "pt2", 00:19:56.178 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:56.178 "is_configured": true, 00:19:56.178 "data_offset": 2048, 00:19:56.178 "data_size": 63488 00:19:56.178 }, 00:19:56.178 { 00:19:56.178 "name": "pt3", 00:19:56.178 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:56.178 "is_configured": true, 00:19:56.178 "data_offset": 2048, 00:19:56.178 "data_size": 63488 00:19:56.178 }, 00:19:56.178 { 00:19:56.178 "name": null, 00:19:56.178 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:56.178 "is_configured": false, 00:19:56.178 "data_offset": 2048, 00:19:56.178 "data_size": 63488 00:19:56.178 } 00:19:56.178 ] 00:19:56.178 }' 00:19:56.178 05:39:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.178 05:39:00 -- common/autotest_common.sh@10 -- # set +x 00:19:57.114 05:39:00 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:57.114 05:39:00 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:57.114 05:39:00 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:57.114 05:39:00 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.114 [2024-10-07 05:39:01.019438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.114 [2024-10-07 05:39:01.019499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.114 [2024-10-07 05:39:01.019536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:57.114 [2024-10-07 05:39:01.019558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.114 [2024-10-07 05:39:01.019952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.114 [2024-10-07 05:39:01.020004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.114 [2024-10-07 05:39:01.020084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:57.114 [2024-10-07 05:39:01.020106] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:57.114 [2024-10-07 05:39:01.020211] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:19:57.114 [2024-10-07 05:39:01.020223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
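The trace above keeps repeating one pattern: each base bdev is wrapped in a passthru bdev on top of a malloc bdev via rpc.py on the dedicated test socket, and raid_bdev1 is then read back with bdev_raid_get_bdevs and filtered with jq. A condensed sketch of that pattern, using the same socket, script path, bdev names and UUID that appear in the log (not a new command sequence, just the steps pulled out of the xtrace):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create passthru bdev pt3 over malloc3, as bdev_raid.sh@455 does above
  $rpc -s $sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  # read the raid bdev back and pick out raid_bdev1, as verify_raid_bdev_state (sh@127) does
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'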
00:19:57.114 [2024-10-07 05:39:01.020340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:57.114 [2024-10-07 05:39:01.020664] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:19:57.114 [2024-10-07 05:39:01.020686] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:19:57.114 [2024-10-07 05:39:01.020806] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.114 pt4 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.114 05:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.373 05:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.373 "name": "raid_bdev1", 00:19:57.373 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:57.373 "strip_size_kb": 0, 00:19:57.373 "state": "online", 00:19:57.373 "raid_level": "raid1", 00:19:57.373 "superblock": true, 00:19:57.373 "num_base_bdevs": 4, 00:19:57.373 "num_base_bdevs_discovered": 3, 00:19:57.373 "num_base_bdevs_operational": 3, 00:19:57.373 "base_bdevs_list": [ 00:19:57.373 { 00:19:57.373 "name": null, 00:19:57.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.373 "is_configured": false, 00:19:57.373 "data_offset": 2048, 00:19:57.373 "data_size": 63488 00:19:57.373 }, 00:19:57.373 { 00:19:57.373 "name": "pt2", 00:19:57.373 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:57.373 "is_configured": true, 00:19:57.373 "data_offset": 2048, 00:19:57.373 "data_size": 63488 00:19:57.373 }, 00:19:57.373 { 00:19:57.373 "name": "pt3", 00:19:57.373 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:57.373 "is_configured": true, 00:19:57.373 "data_offset": 2048, 00:19:57.373 "data_size": 63488 00:19:57.373 }, 00:19:57.373 { 00:19:57.373 "name": "pt4", 00:19:57.373 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:57.373 "is_configured": true, 00:19:57.373 "data_offset": 2048, 00:19:57.373 "data_size": 63488 00:19:57.373 } 00:19:57.373 ] 00:19:57.373 }' 00:19:57.373 05:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.373 05:39:01 -- common/autotest_common.sh@10 -- # set +x 00:19:58.309 05:39:01 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:58.309 05:39:01 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.309 [2024-10-07 05:39:02.219608] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.309 [2024-10-07 05:39:02.219635] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:58.309 [2024-10-07 05:39:02.219713] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.309 [2024-10-07 05:39:02.219795] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.309 [2024-10-07 05:39:02.219805] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:19:58.309 05:39:02 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.309 05:39:02 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:58.568 05:39:02 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:58.568 05:39:02 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:58.568 05:39:02 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:58.827 [2024-10-07 05:39:02.759736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:58.827 [2024-10-07 05:39:02.759812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.827 [2024-10-07 05:39:02.759849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:58.827 [2024-10-07 05:39:02.759872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.827 [2024-10-07 05:39:02.762045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.827 [2024-10-07 05:39:02.762119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:58.827 [2024-10-07 05:39:02.762214] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:58.827 [2024-10-07 05:39:02.762259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.827 pt1 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.827 05:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.087 05:39:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.087 "name": "raid_bdev1", 00:19:59.087 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:19:59.087 "strip_size_kb": 0, 00:19:59.087 "state": "configuring", 00:19:59.087 "raid_level": "raid1", 00:19:59.087 "superblock": true, 00:19:59.087 "num_base_bdevs": 4, 00:19:59.087 "num_base_bdevs_discovered": 1, 00:19:59.087 "num_base_bdevs_operational": 4, 00:19:59.087 "base_bdevs_list": [ 00:19:59.087 { 00:19:59.087 "name": "pt1", 00:19:59.087 "uuid": 
"b105d13c-505c-5248-bf71-6064ef4517a0", 00:19:59.087 "is_configured": true, 00:19:59.087 "data_offset": 2048, 00:19:59.087 "data_size": 63488 00:19:59.087 }, 00:19:59.087 { 00:19:59.087 "name": null, 00:19:59.087 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:19:59.087 "is_configured": false, 00:19:59.087 "data_offset": 2048, 00:19:59.087 "data_size": 63488 00:19:59.087 }, 00:19:59.087 { 00:19:59.087 "name": null, 00:19:59.087 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:19:59.087 "is_configured": false, 00:19:59.087 "data_offset": 2048, 00:19:59.087 "data_size": 63488 00:19:59.087 }, 00:19:59.087 { 00:19:59.087 "name": null, 00:19:59.087 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:19:59.087 "is_configured": false, 00:19:59.087 "data_offset": 2048, 00:19:59.087 "data_size": 63488 00:19:59.087 } 00:19:59.087 ] 00:19:59.087 }' 00:19:59.087 05:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.087 05:39:03 -- common/autotest_common.sh@10 -- # set +x 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:00.024 05:39:03 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:00.284 05:39:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:00.284 05:39:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:00.284 05:39:04 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:00.853 05:39:04 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:00.853 05:39:04 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:00.853 05:39:04 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:00.853 05:39:04 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:00.853 [2024-10-07 05:39:04.784108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:00.853 [2024-10-07 05:39:04.784166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.853 [2024-10-07 05:39:04.784193] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:00.853 [2024-10-07 05:39:04.784217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.853 [2024-10-07 05:39:04.784557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.853 [2024-10-07 05:39:04.784615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:00.853 [2024-10-07 05:39:04.784696] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:00.853 [2024-10-07 05:39:04.784710] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:00.854 [2024-10-07 05:39:04.784717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.854 [2024-10-07 05:39:04.784731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:20:00.854 [2024-10-07 05:39:04.784786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:00.854 pt4 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.854 05:39:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.113 05:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.113 "name": "raid_bdev1", 00:20:01.113 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:20:01.113 "strip_size_kb": 0, 00:20:01.113 "state": "configuring", 00:20:01.113 "raid_level": "raid1", 00:20:01.113 "superblock": true, 00:20:01.113 "num_base_bdevs": 4, 00:20:01.113 "num_base_bdevs_discovered": 1, 00:20:01.113 "num_base_bdevs_operational": 3, 00:20:01.113 "base_bdevs_list": [ 00:20:01.113 { 00:20:01.113 "name": null, 00:20:01.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.114 "is_configured": false, 00:20:01.114 "data_offset": 2048, 00:20:01.114 "data_size": 63488 00:20:01.114 }, 00:20:01.114 { 00:20:01.114 "name": null, 00:20:01.114 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:20:01.114 "is_configured": false, 00:20:01.114 "data_offset": 2048, 00:20:01.114 "data_size": 63488 00:20:01.114 }, 00:20:01.114 { 00:20:01.114 "name": null, 00:20:01.114 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:20:01.114 "is_configured": false, 00:20:01.114 "data_offset": 2048, 00:20:01.114 "data_size": 63488 00:20:01.114 }, 00:20:01.114 { 00:20:01.114 "name": "pt4", 00:20:01.114 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:20:01.114 "is_configured": true, 00:20:01.114 "data_offset": 2048, 00:20:01.114 "data_size": 63488 00:20:01.114 } 00:20:01.114 ] 00:20:01.114 }' 00:20:01.114 05:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.114 05:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:02.050 05:39:05 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:02.050 05:39:05 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:02.050 05:39:05 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.051 [2024-10-07 05:39:05.936283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.051 [2024-10-07 05:39:05.936356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.051 [2024-10-07 05:39:05.936386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:20:02.051 [2024-10-07 05:39:05.936411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.051 [2024-10-07 
05:39:05.936803] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.051 [2024-10-07 05:39:05.936863] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.051 [2024-10-07 05:39:05.936952] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:02.051 [2024-10-07 05:39:05.936975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.051 pt2 00:20:02.051 05:39:05 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:02.051 05:39:05 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:02.051 05:39:05 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:02.310 [2024-10-07 05:39:06.180329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:02.310 [2024-10-07 05:39:06.180380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.310 [2024-10-07 05:39:06.180407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:20:02.310 [2024-10-07 05:39:06.180432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.310 [2024-10-07 05:39:06.180759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.310 [2024-10-07 05:39:06.180814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:02.310 [2024-10-07 05:39:06.180892] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:02.310 [2024-10-07 05:39:06.180913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:02.310 [2024-10-07 05:39:06.181019] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:20:02.310 [2024-10-07 05:39:06.181030] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:02.310 [2024-10-07 05:39:06.181129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:02.310 [2024-10-07 05:39:06.181417] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:20:02.310 [2024-10-07 05:39:06.181430] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:20:02.310 [2024-10-07 05:39:06.181540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.310 pt3 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.310 05:39:06 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.310 05:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.569 05:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.569 "name": "raid_bdev1", 00:20:02.569 "uuid": "5b5c27fe-c09b-4a7c-ba17-1f3c49afb720", 00:20:02.569 "strip_size_kb": 0, 00:20:02.569 "state": "online", 00:20:02.569 "raid_level": "raid1", 00:20:02.569 "superblock": true, 00:20:02.569 "num_base_bdevs": 4, 00:20:02.569 "num_base_bdevs_discovered": 3, 00:20:02.569 "num_base_bdevs_operational": 3, 00:20:02.569 "base_bdevs_list": [ 00:20:02.569 { 00:20:02.569 "name": null, 00:20:02.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.569 "is_configured": false, 00:20:02.569 "data_offset": 2048, 00:20:02.569 "data_size": 63488 00:20:02.569 }, 00:20:02.569 { 00:20:02.569 "name": "pt2", 00:20:02.569 "uuid": "6b4cc5f8-02db-5be5-a76c-f034856deb33", 00:20:02.569 "is_configured": true, 00:20:02.569 "data_offset": 2048, 00:20:02.569 "data_size": 63488 00:20:02.569 }, 00:20:02.569 { 00:20:02.569 "name": "pt3", 00:20:02.569 "uuid": "193b5c5f-862a-5411-b526-f452969a071f", 00:20:02.569 "is_configured": true, 00:20:02.569 "data_offset": 2048, 00:20:02.569 "data_size": 63488 00:20:02.569 }, 00:20:02.569 { 00:20:02.569 "name": "pt4", 00:20:02.569 "uuid": "36febb24-1072-5203-a6f2-afa44e8e66d0", 00:20:02.569 "is_configured": true, 00:20:02.569 "data_offset": 2048, 00:20:02.569 "data_size": 63488 00:20:02.569 } 00:20:02.569 ] 00:20:02.569 }' 00:20:02.569 05:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.569 05:39:06 -- common/autotest_common.sh@10 -- # set +x 00:20:03.574 05:39:07 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.574 05:39:07 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:03.574 [2024-10-07 05:39:07.392709] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.574 05:39:07 -- bdev/bdev_raid.sh@506 -- # '[' 5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 '!=' 5b5c27fe-c09b-4a7c-ba17-1f3c49afb720 ']' 00:20:03.574 05:39:07 -- bdev/bdev_raid.sh@511 -- # killprocess 156674 00:20:03.574 05:39:07 -- common/autotest_common.sh@926 -- # '[' -z 156674 ']' 00:20:03.574 05:39:07 -- common/autotest_common.sh@930 -- # kill -0 156674 00:20:03.574 05:39:07 -- common/autotest_common.sh@931 -- # uname 00:20:03.574 05:39:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.574 05:39:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 156674 00:20:03.574 killing process with pid 156674 00:20:03.574 05:39:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:03.574 05:39:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.574 05:39:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 156674' 00:20:03.574 05:39:07 -- common/autotest_common.sh@945 -- # kill 156674 00:20:03.574 [2024-10-07 05:39:07.433932] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.574 05:39:07 -- common/autotest_common.sh@950 -- # wait 156674 00:20:03.574 [2024-10-07 05:39:07.433995] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.574 [2024-10-07 05:39:07.434058] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.574 [2024-10-07 
05:39:07.434069] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:20:03.833 [2024-10-07 05:39:07.685491] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:04.767 00:20:04.767 real 0m23.535s 00:20:04.767 user 0m43.455s 00:20:04.767 sys 0m2.691s 00:20:04.767 ************************************ 00:20:04.767 END TEST raid_superblock_test 00:20:04.767 ************************************ 00:20:04.767 05:39:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.767 05:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:04.767 05:39:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:04.767 05:39:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:04.767 05:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:04.767 ************************************ 00:20:04.767 START TEST raid_rebuild_test 00:20:04.767 ************************************ 00:20:04.767 05:39:08 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=164500 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 164500 /var/tmp/spdk-raid.sock 00:20:04.767 05:39:08 -- common/autotest_common.sh@819 -- # '[' -z 164500 ']' 00:20:04.767 05:39:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:04.767 05:39:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:04.767 05:39:08 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.767 05:39:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:04.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:04.767 05:39:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.767 05:39:08 -- common/autotest_common.sh@10 -- # set +x 00:20:04.767 [2024-10-07 05:39:08.729023] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:04.767 [2024-10-07 05:39:08.729888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164500 ] 00:20:04.767 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:04.767 Zero copy mechanism will not be used. 00:20:05.026 [2024-10-07 05:39:08.899593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.285 [2024-10-07 05:39:09.191149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.544 [2024-10-07 05:39:09.404149] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.803 05:39:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.803 05:39:09 -- common/autotest_common.sh@852 -- # return 0 00:20:05.803 05:39:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:05.803 05:39:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:05.803 05:39:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:06.062 BaseBdev1 00:20:06.062 05:39:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:06.062 05:39:09 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:06.062 05:39:09 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.320 BaseBdev2 00:20:06.320 05:39:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:06.579 spare_malloc 00:20:06.579 05:39:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:06.838 spare_delay 00:20:06.838 05:39:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:07.096 [2024-10-07 05:39:10.853608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.096 [2024-10-07 05:39:10.853702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.096 [2024-10-07 05:39:10.853746] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:07.096 [2024-10-07 05:39:10.853797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.096 [2024-10-07 05:39:10.856163] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.096 [2024-10-07 05:39:10.856218] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.096 spare 00:20:07.096 05:39:10 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:07.096 [2024-10-07 05:39:11.049684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.096 [2024-10-07 05:39:11.051361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.096 [2024-10-07 05:39:11.051466] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:07.096 [2024-10-07 05:39:11.051480] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:07.096 [2024-10-07 05:39:11.051609] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:07.096 [2024-10-07 05:39:11.052020] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:07.096 [2024-10-07 05:39:11.052047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:07.096 [2024-10-07 05:39:11.052223] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.096 05:39:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.096 05:39:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.096 05:39:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.097 05:39:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.356 05:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.356 "name": "raid_bdev1", 00:20:07.356 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:07.356 "strip_size_kb": 0, 00:20:07.356 "state": "online", 00:20:07.356 "raid_level": "raid1", 00:20:07.356 "superblock": false, 00:20:07.356 "num_base_bdevs": 2, 00:20:07.356 "num_base_bdevs_discovered": 2, 00:20:07.356 "num_base_bdevs_operational": 2, 00:20:07.356 "base_bdevs_list": [ 00:20:07.356 { 00:20:07.356 "name": "BaseBdev1", 00:20:07.356 "uuid": "b290ce5e-054a-4870-8e7a-87e361a84feb", 00:20:07.356 "is_configured": true, 00:20:07.356 "data_offset": 0, 00:20:07.356 "data_size": 65536 00:20:07.356 }, 00:20:07.356 { 00:20:07.356 "name": "BaseBdev2", 00:20:07.356 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:07.356 "is_configured": true, 00:20:07.356 "data_offset": 0, 00:20:07.356 "data_size": 65536 00:20:07.356 } 00:20:07.356 ] 00:20:07.356 }' 00:20:07.356 05:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.356 05:39:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.922 05:39:11 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:07.922 05:39:11 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:08.181 [2024-10-07 05:39:12.013961] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.181 05:39:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:08.181 05:39:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.181 05:39:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:08.439 05:39:12 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:08.439 05:39:12 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:08.439 05:39:12 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:08.439 05:39:12 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@12 -- # local i 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:08.439 05:39:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:08.699 [2024-10-07 05:39:12.469919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:08.699 /dev/nbd0 00:20:08.699 05:39:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:08.699 05:39:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:08.699 05:39:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:08.699 05:39:12 -- common/autotest_common.sh@857 -- # local i 00:20:08.699 05:39:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:08.699 05:39:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:08.699 05:39:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:08.699 05:39:12 -- common/autotest_common.sh@861 -- # break 00:20:08.699 05:39:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:08.699 05:39:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:08.699 05:39:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:08.699 1+0 records in 00:20:08.699 1+0 records out 00:20:08.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396527 s, 10.3 MB/s 00:20:08.699 05:39:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.699 05:39:12 -- common/autotest_common.sh@874 -- # size=4096 00:20:08.699 05:39:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:08.699 05:39:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:08.699 05:39:12 -- common/autotest_common.sh@877 -- # return 0 00:20:08.699 05:39:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:08.699 05:39:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:08.699 05:39:12 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:08.699 05:39:12 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:08.699 05:39:12 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:13.971 65536+0 records in 00:20:13.971 65536+0 records out 00:20:13.971 33554432 bytes (34 MB, 32 MiB) 
copied, 5.2526 s, 6.4 MB/s 00:20:13.971 05:39:17 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@51 -- # local i 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:13.971 05:39:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@41 -- # break 00:20:14.230 05:39:18 -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.230 [2024-10-07 05:39:18.033518] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.230 05:39:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:14.489 [2024-10-07 05:39:18.313053] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.489 05:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.747 05:39:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.747 "name": "raid_bdev1", 00:20:14.747 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:14.747 "strip_size_kb": 0, 00:20:14.747 "state": "online", 00:20:14.747 "raid_level": "raid1", 00:20:14.747 "superblock": false, 00:20:14.747 "num_base_bdevs": 2, 00:20:14.747 "num_base_bdevs_discovered": 1, 00:20:14.747 "num_base_bdevs_operational": 1, 00:20:14.747 "base_bdevs_list": [ 00:20:14.747 { 00:20:14.747 "name": null, 00:20:14.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.747 "is_configured": false, 00:20:14.747 "data_offset": 0, 00:20:14.747 "data_size": 65536 00:20:14.747 }, 00:20:14.747 { 00:20:14.747 "name": "BaseBdev2", 00:20:14.747 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:14.747 "is_configured": true, 00:20:14.747 "data_offset": 0, 00:20:14.747 "data_size": 65536 00:20:14.747 } 00:20:14.747 ] 00:20:14.747 }' 
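In the rebuild test above, raid_bdev1 is exposed to the host through the NBD driver, filled with random data, and then degraded by removing BaseBdev1; the JSON block just above shows the array staying online with only BaseBdev2 configured. A minimal sketch of the data-fill and degrade steps, assembled from the commands in the log (65536 writes of 512 bytes match the 65536-block, 512-byte-blocklen raid bdev, i.e. 32 MiB):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # map raid_bdev1 to /dev/nbd0 so ordinary block I/O tools can write to it
  $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
  # fill the whole 32 MiB device with random data, one 512-byte block per write
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  # detach the NBD device again before touching the raid members
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  # drop BaseBdev1; the raid1 bdev should remain online in degraded mode
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1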
00:20:14.747 05:39:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.747 05:39:18 -- common/autotest_common.sh@10 -- # set +x 00:20:15.315 05:39:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.573 [2024-10-07 05:39:19.345264] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:15.573 [2024-10-07 05:39:19.345340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.573 [2024-10-07 05:39:19.357991] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:20:15.573 [2024-10-07 05:39:19.360080] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.573 05:39:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.509 05:39:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.768 05:39:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.768 "name": "raid_bdev1", 00:20:16.768 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:16.768 "strip_size_kb": 0, 00:20:16.768 "state": "online", 00:20:16.768 "raid_level": "raid1", 00:20:16.768 "superblock": false, 00:20:16.768 "num_base_bdevs": 2, 00:20:16.768 "num_base_bdevs_discovered": 2, 00:20:16.768 "num_base_bdevs_operational": 2, 00:20:16.768 "process": { 00:20:16.768 "type": "rebuild", 00:20:16.768 "target": "spare", 00:20:16.768 "progress": { 00:20:16.768 "blocks": 24576, 00:20:16.768 "percent": 37 00:20:16.768 } 00:20:16.768 }, 00:20:16.768 "base_bdevs_list": [ 00:20:16.768 { 00:20:16.768 "name": "spare", 00:20:16.768 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:16.768 "is_configured": true, 00:20:16.768 "data_offset": 0, 00:20:16.768 "data_size": 65536 00:20:16.768 }, 00:20:16.768 { 00:20:16.768 "name": "BaseBdev2", 00:20:16.768 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:16.768 "is_configured": true, 00:20:16.768 "data_offset": 0, 00:20:16.768 "data_size": 65536 00:20:16.768 } 00:20:16.768 ] 00:20:16.768 }' 00:20:16.768 05:39:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.768 05:39:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.768 05:39:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:17.026 05:39:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.026 05:39:20 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:17.285 [2024-10-07 05:39:21.013888] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:17.285 [2024-10-07 05:39:21.069343] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:17.286 [2024-10-07 05:39:21.069438] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.286 05:39:21 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.286 05:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.545 05:39:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:17.545 "name": "raid_bdev1", 00:20:17.545 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:17.545 "strip_size_kb": 0, 00:20:17.545 "state": "online", 00:20:17.545 "raid_level": "raid1", 00:20:17.545 "superblock": false, 00:20:17.545 "num_base_bdevs": 2, 00:20:17.545 "num_base_bdevs_discovered": 1, 00:20:17.545 "num_base_bdevs_operational": 1, 00:20:17.545 "base_bdevs_list": [ 00:20:17.545 { 00:20:17.545 "name": null, 00:20:17.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.545 "is_configured": false, 00:20:17.545 "data_offset": 0, 00:20:17.545 "data_size": 65536 00:20:17.545 }, 00:20:17.545 { 00:20:17.545 "name": "BaseBdev2", 00:20:17.545 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:17.545 "is_configured": true, 00:20:17.545 "data_offset": 0, 00:20:17.545 "data_size": 65536 00:20:17.545 } 00:20:17.545 ] 00:20:17.545 }' 00:20:17.545 05:39:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:17.545 05:39:21 -- common/autotest_common.sh@10 -- # set +x 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.111 05:39:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.370 05:39:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.370 "name": "raid_bdev1", 00:20:18.370 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:18.370 "strip_size_kb": 0, 00:20:18.370 "state": "online", 00:20:18.370 "raid_level": "raid1", 00:20:18.370 "superblock": false, 00:20:18.370 "num_base_bdevs": 2, 00:20:18.370 "num_base_bdevs_discovered": 1, 00:20:18.370 "num_base_bdevs_operational": 1, 00:20:18.370 "base_bdevs_list": [ 00:20:18.370 { 00:20:18.370 "name": null, 00:20:18.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.370 "is_configured": false, 00:20:18.370 "data_offset": 0, 00:20:18.370 "data_size": 65536 00:20:18.370 }, 00:20:18.370 { 00:20:18.370 "name": "BaseBdev2", 00:20:18.370 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:18.370 "is_configured": true, 
00:20:18.370 "data_offset": 0, 00:20:18.370 "data_size": 65536 00:20:18.370 } 00:20:18.370 ] 00:20:18.370 }' 00:20:18.370 05:39:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:18.370 05:39:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:18.370 05:39:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.628 05:39:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:18.629 05:39:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.629 [2024-10-07 05:39:22.596749] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:18.629 [2024-10-07 05:39:22.596794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.887 [2024-10-07 05:39:22.608470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:20:18.887 [2024-10-07 05:39:22.610318] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:18.887 05:39:22 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.822 05:39:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.081 "name": "raid_bdev1", 00:20:20.081 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:20.081 "strip_size_kb": 0, 00:20:20.081 "state": "online", 00:20:20.081 "raid_level": "raid1", 00:20:20.081 "superblock": false, 00:20:20.081 "num_base_bdevs": 2, 00:20:20.081 "num_base_bdevs_discovered": 2, 00:20:20.081 "num_base_bdevs_operational": 2, 00:20:20.081 "process": { 00:20:20.081 "type": "rebuild", 00:20:20.081 "target": "spare", 00:20:20.081 "progress": { 00:20:20.081 "blocks": 24576, 00:20:20.081 "percent": 37 00:20:20.081 } 00:20:20.081 }, 00:20:20.081 "base_bdevs_list": [ 00:20:20.081 { 00:20:20.081 "name": "spare", 00:20:20.081 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:20.081 "is_configured": true, 00:20:20.081 "data_offset": 0, 00:20:20.081 "data_size": 65536 00:20:20.081 }, 00:20:20.081 { 00:20:20.081 "name": "BaseBdev2", 00:20:20.081 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:20.081 "is_configured": true, 00:20:20.081 "data_offset": 0, 00:20:20.081 "data_size": 65536 00:20:20.081 } 00:20:20.081 ] 00:20:20.081 }' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:20.081 05:39:23 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@657 -- # local timeout=403 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.081 05:39:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:20.082 05:39:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:20.082 05:39:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:20.082 05:39:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:20.082 05:39:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.082 05:39:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.340 "name": "raid_bdev1", 00:20:20.340 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:20.340 "strip_size_kb": 0, 00:20:20.340 "state": "online", 00:20:20.340 "raid_level": "raid1", 00:20:20.340 "superblock": false, 00:20:20.340 "num_base_bdevs": 2, 00:20:20.340 "num_base_bdevs_discovered": 2, 00:20:20.340 "num_base_bdevs_operational": 2, 00:20:20.340 "process": { 00:20:20.340 "type": "rebuild", 00:20:20.340 "target": "spare", 00:20:20.340 "progress": { 00:20:20.340 "blocks": 30720, 00:20:20.340 "percent": 46 00:20:20.340 } 00:20:20.340 }, 00:20:20.340 "base_bdevs_list": [ 00:20:20.340 { 00:20:20.340 "name": "spare", 00:20:20.340 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:20.340 "is_configured": true, 00:20:20.340 "data_offset": 0, 00:20:20.340 "data_size": 65536 00:20:20.340 }, 00:20:20.340 { 00:20:20.340 "name": "BaseBdev2", 00:20:20.340 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:20.340 "is_configured": true, 00:20:20.340 "data_offset": 0, 00:20:20.340 "data_size": 65536 00:20:20.340 } 00:20:20.340 ] 00:20:20.340 }' 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.340 05:39:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.346 05:39:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.604 05:39:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:21.604 "name": "raid_bdev1", 00:20:21.604 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:21.604 "strip_size_kb": 0, 00:20:21.604 "state": "online", 00:20:21.604 "raid_level": "raid1", 00:20:21.604 "superblock": false, 00:20:21.604 "num_base_bdevs": 2, 00:20:21.604 "num_base_bdevs_discovered": 2, 00:20:21.604 "num_base_bdevs_operational": 2, 00:20:21.604 "process": { 
00:20:21.604 "type": "rebuild", 00:20:21.604 "target": "spare", 00:20:21.604 "progress": { 00:20:21.604 "blocks": 57344, 00:20:21.604 "percent": 87 00:20:21.604 } 00:20:21.604 }, 00:20:21.604 "base_bdevs_list": [ 00:20:21.604 { 00:20:21.604 "name": "spare", 00:20:21.604 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:21.604 "is_configured": true, 00:20:21.604 "data_offset": 0, 00:20:21.604 "data_size": 65536 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "name": "BaseBdev2", 00:20:21.605 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:21.605 "is_configured": true, 00:20:21.605 "data_offset": 0, 00:20:21.605 "data_size": 65536 00:20:21.605 } 00:20:21.605 ] 00:20:21.605 }' 00:20:21.605 05:39:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:21.605 05:39:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.605 05:39:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.864 05:39:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.864 05:39:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:21.864 [2024-10-07 05:39:25.828798] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:21.864 [2024-10-07 05:39:25.828896] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:21.864 [2024-10-07 05:39:25.829014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.800 05:39:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.066 05:39:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:23.067 "name": "raid_bdev1", 00:20:23.067 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:23.067 "strip_size_kb": 0, 00:20:23.067 "state": "online", 00:20:23.067 "raid_level": "raid1", 00:20:23.067 "superblock": false, 00:20:23.067 "num_base_bdevs": 2, 00:20:23.067 "num_base_bdevs_discovered": 2, 00:20:23.067 "num_base_bdevs_operational": 2, 00:20:23.067 "base_bdevs_list": [ 00:20:23.067 { 00:20:23.067 "name": "spare", 00:20:23.067 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:23.067 "is_configured": true, 00:20:23.067 "data_offset": 0, 00:20:23.067 "data_size": 65536 00:20:23.067 }, 00:20:23.067 { 00:20:23.067 "name": "BaseBdev2", 00:20:23.067 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:23.067 "is_configured": true, 00:20:23.067 "data_offset": 0, 00:20:23.067 "data_size": 65536 00:20:23.067 } 00:20:23.067 ] 00:20:23.067 }' 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@660 -- # break 00:20:23.067 05:39:26 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:23.067 05:39:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.068 05:39:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:23.329 "name": "raid_bdev1", 00:20:23.329 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:23.329 "strip_size_kb": 0, 00:20:23.329 "state": "online", 00:20:23.329 "raid_level": "raid1", 00:20:23.329 "superblock": false, 00:20:23.329 "num_base_bdevs": 2, 00:20:23.329 "num_base_bdevs_discovered": 2, 00:20:23.329 "num_base_bdevs_operational": 2, 00:20:23.329 "base_bdevs_list": [ 00:20:23.329 { 00:20:23.329 "name": "spare", 00:20:23.329 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:23.329 "is_configured": true, 00:20:23.329 "data_offset": 0, 00:20:23.329 "data_size": 65536 00:20:23.329 }, 00:20:23.329 { 00:20:23.329 "name": "BaseBdev2", 00:20:23.329 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:23.329 "is_configured": true, 00:20:23.329 "data_offset": 0, 00:20:23.329 "data_size": 65536 00:20:23.329 } 00:20:23.329 ] 00:20:23.329 }' 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.329 05:39:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.588 05:39:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.588 "name": "raid_bdev1", 00:20:23.588 "uuid": "a7ac2430-24d0-42f2-9894-1e2f7f2cf01e", 00:20:23.588 "strip_size_kb": 0, 00:20:23.588 "state": "online", 00:20:23.588 "raid_level": "raid1", 00:20:23.588 "superblock": false, 00:20:23.588 "num_base_bdevs": 2, 00:20:23.588 "num_base_bdevs_discovered": 2, 00:20:23.588 "num_base_bdevs_operational": 2, 00:20:23.588 "base_bdevs_list": [ 00:20:23.588 { 00:20:23.588 "name": "spare", 00:20:23.588 "uuid": "4110a92a-8e3a-593e-a081-c627ba9a4412", 00:20:23.588 "is_configured": true, 00:20:23.588 "data_offset": 0, 
00:20:23.588 "data_size": 65536 00:20:23.588 }, 00:20:23.588 { 00:20:23.588 "name": "BaseBdev2", 00:20:23.588 "uuid": "11ec94b5-2001-4691-9d9d-432eca75a5eb", 00:20:23.588 "is_configured": true, 00:20:23.588 "data_offset": 0, 00:20:23.588 "data_size": 65536 00:20:23.588 } 00:20:23.588 ] 00:20:23.588 }' 00:20:23.588 05:39:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.588 05:39:27 -- common/autotest_common.sh@10 -- # set +x 00:20:24.523 05:39:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:24.523 [2024-10-07 05:39:28.385469] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.523 [2024-10-07 05:39:28.385508] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.523 [2024-10-07 05:39:28.385657] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.523 [2024-10-07 05:39:28.385741] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.523 [2024-10-07 05:39:28.385754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:24.523 05:39:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.523 05:39:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:24.782 05:39:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:24.782 05:39:28 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:24.782 05:39:28 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@12 -- # local i 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:24.782 05:39:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:25.041 /dev/nbd0 00:20:25.041 05:39:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:25.041 05:39:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:25.041 05:39:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:25.041 05:39:28 -- common/autotest_common.sh@857 -- # local i 00:20:25.041 05:39:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:25.041 05:39:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:25.041 05:39:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:25.041 05:39:28 -- common/autotest_common.sh@861 -- # break 00:20:25.041 05:39:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:25.041 05:39:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:25.041 05:39:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.041 1+0 records in 00:20:25.041 1+0 records out 00:20:25.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479984 s, 8.5 MB/s 00:20:25.041 05:39:28 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.041 05:39:28 -- common/autotest_common.sh@874 -- # size=4096 00:20:25.041 05:39:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.041 05:39:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:25.041 05:39:28 -- common/autotest_common.sh@877 -- # return 0 00:20:25.041 05:39:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.041 05:39:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:25.041 05:39:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:25.609 /dev/nbd1 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:25.609 05:39:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:25.609 05:39:29 -- common/autotest_common.sh@857 -- # local i 00:20:25.609 05:39:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:25.609 05:39:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:25.609 05:39:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:25.609 05:39:29 -- common/autotest_common.sh@861 -- # break 00:20:25.609 05:39:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:25.609 05:39:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:25.609 05:39:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:25.609 1+0 records in 00:20:25.609 1+0 records out 00:20:25.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735084 s, 5.6 MB/s 00:20:25.609 05:39:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.609 05:39:29 -- common/autotest_common.sh@874 -- # size=4096 00:20:25.609 05:39:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:25.609 05:39:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:25.609 05:39:29 -- common/autotest_common.sh@877 -- # return 0 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:25.609 05:39:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:25.609 05:39:29 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@51 -- # local i 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.609 05:39:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@41 -- # break 
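The traces above show how the rebuild result is checked end to end: BaseBdev1 and the rebuilt spare are exported over NBD as /dev/nbd0 and /dev/nbd1, each export is gated on a waitfornbd check (the device must appear in /proc/partitions and then answer a single 4 KiB O_DIRECT read), and cmp -i 0 /dev/nbd0 /dev/nbd1 finally confirms the two devices are byte-identical. A minimal stand-alone sketch of that readiness check follows; the retry limit and the individual checks are taken from the trace, while the sleep interval and scratch-file path are assumptions rather than the repo's exact helper:

# Simplified reconstruction of the waitfornbd pattern traced above (illustrative only).
waitfornbd_sketch() {
    local nbd_name=$1 scratch=/tmp/nbdtest i
    for ((i = 1; i <= 20; i++)); do
        # device visible to the kernel yet?
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # one 4 KiB O_DIRECT read proves the export actually serves I/O
        if dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct 2>/dev/null &&
           [ "$(stat -c %s "$scratch")" != 0 ]; then
            rm -f "$scratch"
            return 0
        fi
        sleep 0.1
    done
    rm -f "$scratch"
    return 1
}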
00:20:25.868 05:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.868 05:39:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@41 -- # break 00:20:26.127 05:39:30 -- bdev/nbd_common.sh@45 -- # return 0 00:20:26.127 05:39:30 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:26.127 05:39:30 -- bdev/bdev_raid.sh@709 -- # killprocess 164500 00:20:26.127 05:39:30 -- common/autotest_common.sh@926 -- # '[' -z 164500 ']' 00:20:26.127 05:39:30 -- common/autotest_common.sh@930 -- # kill -0 164500 00:20:26.127 05:39:30 -- common/autotest_common.sh@931 -- # uname 00:20:26.127 05:39:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.127 05:39:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 164500 00:20:26.127 killing process with pid 164500 00:20:26.127 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.127 00:20:26.127 Latency(us) 00:20:26.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.127 =================================================================================================================== 00:20:26.127 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.127 05:39:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:26.127 05:39:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:26.127 05:39:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 164500' 00:20:26.127 05:39:30 -- common/autotest_common.sh@945 -- # kill 164500 00:20:26.127 05:39:30 -- common/autotest_common.sh@950 -- # wait 164500 00:20:26.128 [2024-10-07 05:39:30.088812] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.387 [2024-10-07 05:39:30.303533] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.766 ************************************ 00:20:27.766 END TEST raid_rebuild_test 00:20:27.766 ************************************ 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:27.766 00:20:27.766 real 0m22.721s 00:20:27.766 user 0m30.697s 00:20:27.766 sys 0m4.176s 00:20:27.766 05:39:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.766 05:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:27.766 05:39:31 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:27.766 05:39:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:27.766 05:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:27.766 ************************************ 00:20:27.766 START TEST raid_rebuild_test_sb 00:20:27.766 ************************************ 00:20:27.766 05:39:31 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:27.766 
05:39:31 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@544 -- # raid_pid=165936 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@545 -- # waitforlisten 165936 /var/tmp/spdk-raid.sock 00:20:27.766 05:39:31 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:27.767 05:39:31 -- common/autotest_common.sh@819 -- # '[' -z 165936 ']' 00:20:27.767 05:39:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:27.767 05:39:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.767 05:39:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:27.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:27.767 05:39:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.767 05:39:31 -- common/autotest_common.sh@10 -- # set +x 00:20:27.767 [2024-10-07 05:39:31.510375] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:20:27.767 [2024-10-07 05:39:31.510898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165936 ] 00:20:27.767 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:27.767 Zero copy mechanism will not be used. 
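Before the base bdevs for the superblock variant are created, the log records the bdevperf command line and the waitforlisten call that gate every later RPC: bdevperf is started with -z so it sits idle on /var/tmp/spdk-raid.sock until it is configured, and only once that socket answers does the test start issuing bdev_* RPCs. A condensed sketch of that startup sequence is below; the rpc_get_methods polling loop is an assumed stand-in for the repo's waitforlisten helper, while the bdevperf flags and the first two RPCs are copied from the traces in this run:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock
# -z keeps bdevperf idle until an RPC starts it; -L bdev_raid enables the debug log flag seen in this run
"$SPDK_DIR/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# crude substitute for waitforlisten: poll until the RPC socket responds
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# first base bdev, exactly as traced below: a 32 MB malloc bdev with 512-byte blocks,
# wrapped in a passthru bdev so it can be deleted and re-created later in the test
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_malloc_create 32 512 -b BaseBdev1_malloc
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1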
00:20:27.767 [2024-10-07 05:39:31.673448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.026 [2024-10-07 05:39:31.893476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.286 [2024-10-07 05:39:32.089164] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.545 05:39:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:28.545 05:39:32 -- common/autotest_common.sh@852 -- # return 0 00:20:28.546 05:39:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:28.546 05:39:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:28.546 05:39:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:28.805 BaseBdev1_malloc 00:20:28.805 05:39:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:29.064 [2024-10-07 05:39:32.860057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:29.064 [2024-10-07 05:39:32.860801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.065 [2024-10-07 05:39:32.861094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:29.065 [2024-10-07 05:39:32.861387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.065 [2024-10-07 05:39:32.864193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.065 [2024-10-07 05:39:32.864487] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:29.065 BaseBdev1 00:20:29.065 05:39:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:29.065 05:39:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:29.065 05:39:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:29.324 BaseBdev2_malloc 00:20:29.324 05:39:33 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:29.583 [2024-10-07 05:39:33.355335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:29.583 [2024-10-07 05:39:33.355787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.583 [2024-10-07 05:39:33.356068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:29.583 [2024-10-07 05:39:33.356354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.583 [2024-10-07 05:39:33.358786] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.583 [2024-10-07 05:39:33.359112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:29.583 BaseBdev2 00:20:29.583 05:39:33 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:29.843 spare_malloc 00:20:29.843 05:39:33 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:30.102 spare_delay 00:20:30.102 05:39:33 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:30.361 [2024-10-07 05:39:34.098181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:30.361 [2024-10-07 05:39:34.098936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.361 [2024-10-07 05:39:34.099286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:30.361 [2024-10-07 05:39:34.099637] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.361 [2024-10-07 05:39:34.102194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.361 [2024-10-07 05:39:34.102517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:30.361 spare 00:20:30.361 05:39:34 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:30.361 [2024-10-07 05:39:34.335067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.361 [2024-10-07 05:39:34.337337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.361 [2024-10-07 05:39:34.337747] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:30.361 [2024-10-07 05:39:34.337927] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:30.361 [2024-10-07 05:39:34.338121] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:30.361 [2024-10-07 05:39:34.338667] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:30.361 [2024-10-07 05:39:34.338811] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:30.361 [2024-10-07 05:39:34.339181] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.620 "name": "raid_bdev1", 00:20:30.620 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:30.620 "strip_size_kb": 0, 00:20:30.620 "state": "online", 00:20:30.620 "raid_level": "raid1", 00:20:30.620 "superblock": true, 00:20:30.620 "num_base_bdevs": 2, 00:20:30.620 "num_base_bdevs_discovered": 2, 00:20:30.620 "num_base_bdevs_operational": 2, 00:20:30.620 
"base_bdevs_list": [ 00:20:30.620 { 00:20:30.620 "name": "BaseBdev1", 00:20:30.620 "uuid": "c3f705ed-9298-5f53-8d4d-9a1570eea1a4", 00:20:30.620 "is_configured": true, 00:20:30.620 "data_offset": 2048, 00:20:30.620 "data_size": 63488 00:20:30.620 }, 00:20:30.620 { 00:20:30.620 "name": "BaseBdev2", 00:20:30.620 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:30.620 "is_configured": true, 00:20:30.620 "data_offset": 2048, 00:20:30.620 "data_size": 63488 00:20:30.620 } 00:20:30.620 ] 00:20:30.620 }' 00:20:30.620 05:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.620 05:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:31.189 05:39:35 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:31.189 05:39:35 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.447 [2024-10-07 05:39:35.411535] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:31.706 05:39:35 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@12 -- # local i 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.706 05:39:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:31.965 [2024-10-07 05:39:35.883509] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:31.965 /dev/nbd0 00:20:31.965 05:39:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:31.965 05:39:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:31.965 05:39:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:31.965 05:39:35 -- common/autotest_common.sh@857 -- # local i 00:20:31.965 05:39:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:31.965 05:39:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:31.965 05:39:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:32.224 05:39:35 -- common/autotest_common.sh@861 -- # break 00:20:32.224 05:39:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:32.224 05:39:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:32.224 05:39:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.224 1+0 records in 00:20:32.224 1+0 records out 00:20:32.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295307 s, 13.9 MB/s 00:20:32.224 05:39:35 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.224 05:39:35 -- common/autotest_common.sh@874 -- # size=4096 00:20:32.224 05:39:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.224 05:39:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:32.224 05:39:35 -- common/autotest_common.sh@877 -- # return 0 00:20:32.224 05:39:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.224 05:39:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:32.224 05:39:35 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:32.224 05:39:35 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:32.224 05:39:35 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:37.497 63488+0 records in 00:20:37.497 63488+0 records out 00:20:37.497 32505856 bytes (33 MB, 31 MiB) copied, 5.08076 s, 6.4 MB/s 00:20:37.497 05:39:41 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@51 -- # local i 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.497 [2024-10-07 05:39:41.329908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@41 -- # break 00:20:37.497 05:39:41 -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.497 05:39:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:37.774 [2024-10-07 05:39:41.525701] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.774 05:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.046 
05:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.046 "name": "raid_bdev1", 00:20:38.046 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:38.046 "strip_size_kb": 0, 00:20:38.046 "state": "online", 00:20:38.046 "raid_level": "raid1", 00:20:38.046 "superblock": true, 00:20:38.046 "num_base_bdevs": 2, 00:20:38.046 "num_base_bdevs_discovered": 1, 00:20:38.046 "num_base_bdevs_operational": 1, 00:20:38.046 "base_bdevs_list": [ 00:20:38.046 { 00:20:38.046 "name": null, 00:20:38.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.046 "is_configured": false, 00:20:38.046 "data_offset": 2048, 00:20:38.046 "data_size": 63488 00:20:38.046 }, 00:20:38.046 { 00:20:38.046 "name": "BaseBdev2", 00:20:38.046 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:38.046 "is_configured": true, 00:20:38.046 "data_offset": 2048, 00:20:38.046 "data_size": 63488 00:20:38.046 } 00:20:38.046 ] 00:20:38.046 }' 00:20:38.046 05:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.046 05:39:41 -- common/autotest_common.sh@10 -- # set +x 00:20:38.614 05:39:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:38.874 [2024-10-07 05:39:42.633999] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:38.874 [2024-10-07 05:39:42.634178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:38.874 [2024-10-07 05:39:42.647700] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:20:38.874 [2024-10-07 05:39:42.649838] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.874 05:39:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.810 05:39:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.069 05:39:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:40.069 "name": "raid_bdev1", 00:20:40.069 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:40.069 "strip_size_kb": 0, 00:20:40.069 "state": "online", 00:20:40.069 "raid_level": "raid1", 00:20:40.069 "superblock": true, 00:20:40.069 "num_base_bdevs": 2, 00:20:40.070 "num_base_bdevs_discovered": 2, 00:20:40.070 "num_base_bdevs_operational": 2, 00:20:40.070 "process": { 00:20:40.070 "type": "rebuild", 00:20:40.070 "target": "spare", 00:20:40.070 "progress": { 00:20:40.070 "blocks": 24576, 00:20:40.070 "percent": 38 00:20:40.070 } 00:20:40.070 }, 00:20:40.070 "base_bdevs_list": [ 00:20:40.070 { 00:20:40.070 "name": "spare", 00:20:40.070 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:40.070 "is_configured": true, 00:20:40.070 "data_offset": 2048, 00:20:40.070 "data_size": 63488 00:20:40.070 }, 00:20:40.070 { 00:20:40.070 "name": "BaseBdev2", 00:20:40.070 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:40.070 "is_configured": true, 00:20:40.070 "data_offset": 2048, 00:20:40.070 "data_size": 63488 
00:20:40.070 } 00:20:40.070 ] 00:20:40.070 }' 00:20:40.070 05:39:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:40.070 05:39:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.070 05:39:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:40.070 05:39:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.070 05:39:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:40.329 [2024-10-07 05:39:44.171406] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:40.329 [2024-10-07 05:39:44.260573] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:40.329 [2024-10-07 05:39:44.260780] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.329 05:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.588 05:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.588 "name": "raid_bdev1", 00:20:40.588 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:40.588 "strip_size_kb": 0, 00:20:40.588 "state": "online", 00:20:40.588 "raid_level": "raid1", 00:20:40.588 "superblock": true, 00:20:40.588 "num_base_bdevs": 2, 00:20:40.588 "num_base_bdevs_discovered": 1, 00:20:40.588 "num_base_bdevs_operational": 1, 00:20:40.588 "base_bdevs_list": [ 00:20:40.588 { 00:20:40.588 "name": null, 00:20:40.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.588 "is_configured": false, 00:20:40.588 "data_offset": 2048, 00:20:40.588 "data_size": 63488 00:20:40.588 }, 00:20:40.588 { 00:20:40.588 "name": "BaseBdev2", 00:20:40.588 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:40.588 "is_configured": true, 00:20:40.588 "data_offset": 2048, 00:20:40.588 "data_size": 63488 00:20:40.588 } 00:20:40.588 ] 00:20:40.588 }' 00:20:40.588 05:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.588 05:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:41.524 05:39:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.524 05:39:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:41.524 05:39:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:41.525 "name": "raid_bdev1", 00:20:41.525 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:41.525 "strip_size_kb": 0, 00:20:41.525 "state": "online", 00:20:41.525 "raid_level": "raid1", 00:20:41.525 "superblock": true, 00:20:41.525 "num_base_bdevs": 2, 00:20:41.525 "num_base_bdevs_discovered": 1, 00:20:41.525 "num_base_bdevs_operational": 1, 00:20:41.525 "base_bdevs_list": [ 00:20:41.525 { 00:20:41.525 "name": null, 00:20:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.525 "is_configured": false, 00:20:41.525 "data_offset": 2048, 00:20:41.525 "data_size": 63488 00:20:41.525 }, 00:20:41.525 { 00:20:41.525 "name": "BaseBdev2", 00:20:41.525 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:41.525 "is_configured": true, 00:20:41.525 "data_offset": 2048, 00:20:41.525 "data_size": 63488 00:20:41.525 } 00:20:41.525 ] 00:20:41.525 }' 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:41.525 05:39:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:41.783 [2024-10-07 05:39:45.721192] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:41.783 [2024-10-07 05:39:45.721453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.783 [2024-10-07 05:39:45.733845] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:20:41.783 [2024-10-07 05:39:45.735757] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:41.783 05:39:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.157 05:39:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.157 05:39:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.157 "name": "raid_bdev1", 00:20:43.157 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:43.157 "strip_size_kb": 0, 00:20:43.157 "state": "online", 00:20:43.157 "raid_level": "raid1", 00:20:43.157 "superblock": true, 00:20:43.157 "num_base_bdevs": 2, 00:20:43.157 "num_base_bdevs_discovered": 2, 00:20:43.157 "num_base_bdevs_operational": 2, 00:20:43.157 "process": { 00:20:43.157 "type": "rebuild", 00:20:43.157 "target": "spare", 00:20:43.157 "progress": { 00:20:43.157 "blocks": 24576, 00:20:43.157 "percent": 38 00:20:43.157 } 00:20:43.157 }, 00:20:43.157 "base_bdevs_list": [ 00:20:43.157 { 00:20:43.157 "name": "spare", 00:20:43.157 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:43.157 
"is_configured": true, 00:20:43.157 "data_offset": 2048, 00:20:43.157 "data_size": 63488 00:20:43.157 }, 00:20:43.157 { 00:20:43.157 "name": "BaseBdev2", 00:20:43.157 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:43.157 "is_configured": true, 00:20:43.157 "data_offset": 2048, 00:20:43.157 "data_size": 63488 00:20:43.157 } 00:20:43.157 ] 00:20:43.157 }' 00:20:43.157 05:39:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.157 05:39:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.157 05:39:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:43.414 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:43.414 05:39:47 -- bdev/bdev_raid.sh@657 -- # local timeout=427 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.415 05:39:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:43.672 "name": "raid_bdev1", 00:20:43.672 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:43.672 "strip_size_kb": 0, 00:20:43.672 "state": "online", 00:20:43.672 "raid_level": "raid1", 00:20:43.672 "superblock": true, 00:20:43.672 "num_base_bdevs": 2, 00:20:43.672 "num_base_bdevs_discovered": 2, 00:20:43.672 "num_base_bdevs_operational": 2, 00:20:43.672 "process": { 00:20:43.672 "type": "rebuild", 00:20:43.672 "target": "spare", 00:20:43.672 "progress": { 00:20:43.672 "blocks": 32768, 00:20:43.672 "percent": 51 00:20:43.672 } 00:20:43.672 }, 00:20:43.672 "base_bdevs_list": [ 00:20:43.672 { 00:20:43.672 "name": "spare", 00:20:43.672 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:43.672 "is_configured": true, 00:20:43.672 "data_offset": 2048, 00:20:43.672 "data_size": 63488 00:20:43.672 }, 00:20:43.672 { 00:20:43.672 "name": "BaseBdev2", 00:20:43.672 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:43.672 "is_configured": true, 00:20:43.672 "data_offset": 2048, 00:20:43.672 "data_size": 63488 00:20:43.672 } 00:20:43.672 ] 00:20:43.672 }' 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.672 05:39:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@658 
-- # (( SECONDS < timeout )) 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.607 05:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.866 05:39:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:44.866 "name": "raid_bdev1", 00:20:44.866 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:44.866 "strip_size_kb": 0, 00:20:44.866 "state": "online", 00:20:44.866 "raid_level": "raid1", 00:20:44.866 "superblock": true, 00:20:44.866 "num_base_bdevs": 2, 00:20:44.866 "num_base_bdevs_discovered": 2, 00:20:44.866 "num_base_bdevs_operational": 2, 00:20:44.866 "process": { 00:20:44.866 "type": "rebuild", 00:20:44.866 "target": "spare", 00:20:44.866 "progress": { 00:20:44.866 "blocks": 61440, 00:20:44.866 "percent": 96 00:20:44.866 } 00:20:44.866 }, 00:20:44.866 "base_bdevs_list": [ 00:20:44.866 { 00:20:44.866 "name": "spare", 00:20:44.866 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:44.866 "is_configured": true, 00:20:44.866 "data_offset": 2048, 00:20:44.866 "data_size": 63488 00:20:44.866 }, 00:20:44.866 { 00:20:44.866 "name": "BaseBdev2", 00:20:44.866 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:44.866 "is_configured": true, 00:20:44.866 "data_offset": 2048, 00:20:44.866 "data_size": 63488 00:20:44.866 } 00:20:44.866 ] 00:20:44.866 }' 00:20:44.866 05:39:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:44.866 05:39:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.866 05:39:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:45.124 [2024-10-07 05:39:48.856375] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:45.124 [2024-10-07 05:39:48.856449] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:45.124 [2024-10-07 05:39:48.856599] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.124 05:39:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.124 05:39:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:46.060 05:39:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.061 05:39:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.061 05:39:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.319 05:39:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.319 "name": "raid_bdev1", 00:20:46.319 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:46.319 "strip_size_kb": 0, 00:20:46.319 "state": 
"online", 00:20:46.319 "raid_level": "raid1", 00:20:46.319 "superblock": true, 00:20:46.319 "num_base_bdevs": 2, 00:20:46.319 "num_base_bdevs_discovered": 2, 00:20:46.319 "num_base_bdevs_operational": 2, 00:20:46.319 "base_bdevs_list": [ 00:20:46.319 { 00:20:46.319 "name": "spare", 00:20:46.319 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:46.319 "is_configured": true, 00:20:46.319 "data_offset": 2048, 00:20:46.319 "data_size": 63488 00:20:46.319 }, 00:20:46.320 { 00:20:46.320 "name": "BaseBdev2", 00:20:46.320 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:46.320 "is_configured": true, 00:20:46.320 "data_offset": 2048, 00:20:46.320 "data_size": 63488 00:20:46.320 } 00:20:46.320 ] 00:20:46.320 }' 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@660 -- # break 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.320 05:39:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.578 05:39:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.578 "name": "raid_bdev1", 00:20:46.578 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:46.578 "strip_size_kb": 0, 00:20:46.578 "state": "online", 00:20:46.578 "raid_level": "raid1", 00:20:46.578 "superblock": true, 00:20:46.578 "num_base_bdevs": 2, 00:20:46.578 "num_base_bdevs_discovered": 2, 00:20:46.578 "num_base_bdevs_operational": 2, 00:20:46.578 "base_bdevs_list": [ 00:20:46.578 { 00:20:46.578 "name": "spare", 00:20:46.578 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:46.578 "is_configured": true, 00:20:46.578 "data_offset": 2048, 00:20:46.578 "data_size": 63488 00:20:46.578 }, 00:20:46.578 { 00:20:46.578 "name": "BaseBdev2", 00:20:46.578 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:46.578 "is_configured": true, 00:20:46.578 "data_offset": 2048, 00:20:46.578 "data_size": 63488 00:20:46.578 } 00:20:46.578 ] 00:20:46.578 }' 00:20:46.578 05:39:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:46.844 05:39:50 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.844 05:39:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.844 "name": "raid_bdev1", 00:20:46.844 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:46.844 "strip_size_kb": 0, 00:20:46.844 "state": "online", 00:20:46.844 "raid_level": "raid1", 00:20:46.844 "superblock": true, 00:20:46.844 "num_base_bdevs": 2, 00:20:46.844 "num_base_bdevs_discovered": 2, 00:20:46.844 "num_base_bdevs_operational": 2, 00:20:46.844 "base_bdevs_list": [ 00:20:46.844 { 00:20:46.844 "name": "spare", 00:20:46.844 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:46.844 "is_configured": true, 00:20:46.844 "data_offset": 2048, 00:20:46.844 "data_size": 63488 00:20:46.844 }, 00:20:46.844 { 00:20:46.845 "name": "BaseBdev2", 00:20:46.845 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:46.845 "is_configured": true, 00:20:46.845 "data_offset": 2048, 00:20:46.845 "data_size": 63488 00:20:46.845 } 00:20:46.845 ] 00:20:46.845 }' 00:20:46.845 05:39:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.845 05:39:50 -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 05:39:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:47.670 [2024-10-07 05:39:51.638106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:47.670 [2024-10-07 05:39:51.638143] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:47.670 [2024-10-07 05:39:51.638250] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:47.670 [2024-10-07 05:39:51.638330] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:47.670 [2024-10-07 05:39:51.638342] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:20:47.928 05:39:51 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.928 05:39:51 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:48.187 05:39:51 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:48.187 05:39:51 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:48.187 05:39:51 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@12 -- # local i 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.187 05:39:51 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:48.187 /dev/nbd0 00:20:48.187 05:39:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:48.187 05:39:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:48.187 05:39:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:48.187 05:39:52 -- common/autotest_common.sh@857 -- # local i 00:20:48.187 05:39:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:48.187 05:39:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:48.187 05:39:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:48.187 05:39:52 -- common/autotest_common.sh@861 -- # break 00:20:48.187 05:39:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:48.187 05:39:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:48.187 05:39:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.187 1+0 records in 00:20:48.187 1+0 records out 00:20:48.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066936 s, 6.1 MB/s 00:20:48.187 05:39:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.187 05:39:52 -- common/autotest_common.sh@874 -- # size=4096 00:20:48.187 05:39:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.187 05:39:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:48.187 05:39:52 -- common/autotest_common.sh@877 -- # return 0 00:20:48.187 05:39:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.187 05:39:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.187 05:39:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:48.446 /dev/nbd1 00:20:48.446 05:39:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:48.446 05:39:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:48.446 05:39:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:48.446 05:39:52 -- common/autotest_common.sh@857 -- # local i 00:20:48.446 05:39:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:48.446 05:39:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:48.446 05:39:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:48.446 05:39:52 -- common/autotest_common.sh@861 -- # break 00:20:48.446 05:39:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:48.446 05:39:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:48.446 05:39:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:48.446 1+0 records in 00:20:48.446 1+0 records out 00:20:48.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592224 s, 6.9 MB/s 00:20:48.446 05:39:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.446 05:39:52 -- common/autotest_common.sh@874 -- # size=4096 00:20:48.446 05:39:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:48.446 05:39:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:48.446 05:39:52 -- common/autotest_common.sh@877 -- # return 0 00:20:48.446 05:39:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:48.446 05:39:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:48.446 05:39:52 -- bdev/bdev_raid.sh@688 -- 
# cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:48.706 05:39:52 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@51 -- # local i 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.706 05:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@41 -- # break 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@45 -- # return 0 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:48.966 05:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@41 -- # break 00:20:49.226 05:39:53 -- bdev/nbd_common.sh@45 -- # return 0 00:20:49.226 05:39:53 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:49.226 05:39:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:49.226 05:39:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:49.226 05:39:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:49.484 05:39:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:49.743 [2024-10-07 05:39:53.519874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:49.743 [2024-10-07 05:39:53.519957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.743 [2024-10-07 05:39:53.519993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:49.743 [2024-10-07 05:39:53.520022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.743 [2024-10-07 05:39:53.522414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.743 [2024-10-07 05:39:53.522483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:49.743 [2024-10-07 05:39:53.522610] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:49.743 [2024-10-07 05:39:53.522679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:20:49.743 BaseBdev1 00:20:49.743 05:39:53 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:49.743 05:39:53 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:49.743 05:39:53 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:50.001 05:39:53 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:50.002 [2024-10-07 05:39:53.963958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:50.002 [2024-10-07 05:39:53.964037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.002 [2024-10-07 05:39:53.964076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:50.002 [2024-10-07 05:39:53.964107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.002 [2024-10-07 05:39:53.964558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.002 [2024-10-07 05:39:53.964621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:50.002 [2024-10-07 05:39:53.964717] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:50.002 [2024-10-07 05:39:53.964733] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:50.002 [2024-10-07 05:39:53.964741] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.002 [2024-10-07 05:39:53.964758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:20:50.002 [2024-10-07 05:39:53.964822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:50.002 BaseBdev2 00:20:50.260 05:39:53 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:50.260 05:39:54 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:50.519 [2024-10-07 05:39:54.420026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:50.519 [2024-10-07 05:39:54.420083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.519 [2024-10-07 05:39:54.420121] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:50.519 [2024-10-07 05:39:54.420142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.519 [2024-10-07 05:39:54.420597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.519 [2024-10-07 05:39:54.420655] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:50.519 [2024-10-07 05:39:54.420758] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:50.519 [2024-10-07 05:39:54.420789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.519 spare 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:50.519 05:39:54 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.519 05:39:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.778 [2024-10-07 05:39:54.520884] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:50.778 [2024-10-07 05:39:54.520907] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:50.778 [2024-10-07 05:39:54.521025] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:20:50.778 [2024-10-07 05:39:54.521399] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:50.778 [2024-10-07 05:39:54.521421] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:20:50.778 [2024-10-07 05:39:54.521556] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.778 05:39:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.778 "name": "raid_bdev1", 00:20:50.778 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:50.778 "strip_size_kb": 0, 00:20:50.778 "state": "online", 00:20:50.778 "raid_level": "raid1", 00:20:50.778 "superblock": true, 00:20:50.778 "num_base_bdevs": 2, 00:20:50.778 "num_base_bdevs_discovered": 2, 00:20:50.778 "num_base_bdevs_operational": 2, 00:20:50.778 "base_bdevs_list": [ 00:20:50.778 { 00:20:50.778 "name": "spare", 00:20:50.778 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:50.778 "is_configured": true, 00:20:50.778 "data_offset": 2048, 00:20:50.778 "data_size": 63488 00:20:50.778 }, 00:20:50.778 { 00:20:50.778 "name": "BaseBdev2", 00:20:50.778 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:50.778 "is_configured": true, 00:20:50.778 "data_offset": 2048, 00:20:50.778 "data_size": 63488 00:20:50.778 } 00:20:50.778 ] 00:20:50.778 }' 00:20:50.778 05:39:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.778 05:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.345 05:39:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.603 05:39:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.603 "name": "raid_bdev1", 00:20:51.603 "uuid": "e36b99d8-9567-49f9-8a67-a36208ed1da2", 00:20:51.603 "strip_size_kb": 0, 00:20:51.603 "state": "online", 
00:20:51.603 "raid_level": "raid1", 00:20:51.603 "superblock": true, 00:20:51.603 "num_base_bdevs": 2, 00:20:51.603 "num_base_bdevs_discovered": 2, 00:20:51.603 "num_base_bdevs_operational": 2, 00:20:51.603 "base_bdevs_list": [ 00:20:51.603 { 00:20:51.603 "name": "spare", 00:20:51.603 "uuid": "5e483563-bdae-5c3a-b724-6a1cb97162f5", 00:20:51.603 "is_configured": true, 00:20:51.603 "data_offset": 2048, 00:20:51.603 "data_size": 63488 00:20:51.603 }, 00:20:51.603 { 00:20:51.603 "name": "BaseBdev2", 00:20:51.603 "uuid": "1db2d665-2ca1-5fef-9833-bb256e2bba6e", 00:20:51.603 "is_configured": true, 00:20:51.603 "data_offset": 2048, 00:20:51.603 "data_size": 63488 00:20:51.603 } 00:20:51.604 ] 00:20:51.604 }' 00:20:51.604 05:39:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.604 05:39:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.604 05:39:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.864 05:39:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.864 05:39:55 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.864 05:39:55 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:51.864 05:39:55 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.864 05:39:55 -- bdev/bdev_raid.sh@709 -- # killprocess 165936 00:20:51.864 05:39:55 -- common/autotest_common.sh@926 -- # '[' -z 165936 ']' 00:20:51.864 05:39:55 -- common/autotest_common.sh@930 -- # kill -0 165936 00:20:51.864 05:39:55 -- common/autotest_common.sh@931 -- # uname 00:20:51.864 05:39:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.864 05:39:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 165936 00:20:51.864 killing process with pid 165936 00:20:51.864 Received shutdown signal, test time was about 60.000000 seconds 00:20:51.864 00:20:51.864 Latency(us) 00:20:51.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.864 =================================================================================================================== 00:20:51.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.864 05:39:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:51.864 05:39:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:51.864 05:39:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 165936' 00:20:51.864 05:39:55 -- common/autotest_common.sh@945 -- # kill 165936 00:20:51.864 05:39:55 -- common/autotest_common.sh@950 -- # wait 165936 00:20:51.864 [2024-10-07 05:39:55.817624] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.864 [2024-10-07 05:39:55.817701] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.864 [2024-10-07 05:39:55.817765] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.864 [2024-10-07 05:39:55.817776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:20:52.123 [2024-10-07 05:39:56.018167] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:53.105 ************************************ 00:20:53.105 END TEST raid_rebuild_test_sb 00:20:53.105 ************************************ 00:20:53.105 05:39:57 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:53.105 00:20:53.105 real 0m25.614s 00:20:53.105 
user 0m36.798s 00:20:53.105 sys 0m4.221s 00:20:53.105 05:39:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.105 05:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:53.378 05:39:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:53.378 05:39:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:53.378 05:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:53.378 ************************************ 00:20:53.378 START TEST raid_rebuild_test_io 00:20:53.378 ************************************ 00:20:53.378 05:39:57 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@544 -- # raid_pid=167299 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:53.378 05:39:57 -- bdev/bdev_raid.sh@545 -- # waitforlisten 167299 /var/tmp/spdk-raid.sock 00:20:53.378 05:39:57 -- common/autotest_common.sh@819 -- # '[' -z 167299 ']' 00:20:53.378 05:39:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:53.378 05:39:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:53.378 05:39:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:53.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:53.378 05:39:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:53.378 05:39:57 -- common/autotest_common.sh@10 -- # set +x 00:20:53.378 [2024-10-07 05:39:57.192310] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
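The trace above records raid_rebuild_test_io launching bdevperf as a long-lived RPC target on /var/tmp/spdk-raid.sock, driving a 60-second randrw workload against raid_bdev1 while the rebuild is exercised, and blocking in waitforlisten until the socket accepts RPCs. A minimal sketch of that launch-and-wait step is shown below; the bdevperf command line and socket path are copied from the log, while the polling loop is a simplified stand-in for the real waitforlisten helper in autotest_common.sh and rpc_get_methods is used only as a cheap liveness probe.

  # Start bdevperf as an RPC server; the malloc/raid bdevs are created later over the socket.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Simplified stand-in for waitforlisten: poll until the UNIX-domain RPC socket answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done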
00:20:53.378 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:53.378 Zero copy mechanism will not be used. 00:20:53.378 [2024-10-07 05:39:57.192524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167299 ] 00:20:53.637 [2024-10-07 05:39:57.365156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.637 [2024-10-07 05:39:57.568589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.896 [2024-10-07 05:39:57.755893] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:54.155 05:39:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:54.155 05:39:58 -- common/autotest_common.sh@852 -- # return 0 00:20:54.155 05:39:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:54.155 05:39:58 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:54.155 05:39:58 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:54.414 BaseBdev1 00:20:54.414 05:39:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:54.414 05:39:58 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:54.414 05:39:58 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:54.671 BaseBdev2 00:20:54.671 05:39:58 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:54.929 spare_malloc 00:20:54.929 05:39:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:55.188 spare_delay 00:20:55.188 05:39:59 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:55.446 [2024-10-07 05:39:59.294271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.446 [2024-10-07 05:39:59.294366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.446 [2024-10-07 05:39:59.294407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:55.446 [2024-10-07 05:39:59.294455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.446 [2024-10-07 05:39:59.296908] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.446 [2024-10-07 05:39:59.296957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.446 spare 00:20:55.446 05:39:59 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:55.706 [2024-10-07 05:39:59.534377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.706 [2024-10-07 05:39:59.536238] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.706 [2024-10-07 05:39:59.536321] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:55.706 [2024-10-07 05:39:59.536333] bdev_raid.c:1585:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:20:55.706 [2024-10-07 05:39:59.536451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:55.706 [2024-10-07 05:39:59.536772] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:55.706 [2024-10-07 05:39:59.536794] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:55.706 [2024-10-07 05:39:59.536938] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.706 05:39:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.964 05:39:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.964 "name": "raid_bdev1", 00:20:55.964 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:20:55.964 "strip_size_kb": 0, 00:20:55.964 "state": "online", 00:20:55.964 "raid_level": "raid1", 00:20:55.964 "superblock": false, 00:20:55.964 "num_base_bdevs": 2, 00:20:55.964 "num_base_bdevs_discovered": 2, 00:20:55.964 "num_base_bdevs_operational": 2, 00:20:55.964 "base_bdevs_list": [ 00:20:55.964 { 00:20:55.964 "name": "BaseBdev1", 00:20:55.964 "uuid": "9c04a9f9-ae78-4323-9d70-8d6c464c7925", 00:20:55.964 "is_configured": true, 00:20:55.964 "data_offset": 0, 00:20:55.964 "data_size": 65536 00:20:55.964 }, 00:20:55.964 { 00:20:55.964 "name": "BaseBdev2", 00:20:55.964 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:20:55.964 "is_configured": true, 00:20:55.964 "data_offset": 0, 00:20:55.964 "data_size": 65536 00:20:55.964 } 00:20:55.964 ] 00:20:55.964 }' 00:20:55.964 05:39:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.964 05:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:56.531 05:40:00 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:56.531 05:40:00 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:56.790 [2024-10-07 05:40:00.646703] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.790 05:40:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:56.790 05:40:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.790 05:40:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:57.049 05:40:00 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:57.049 05:40:00 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:57.049 05:40:00 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:57.049 05:40:00 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:57.049 [2024-10-07 05:40:00.958165] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:57.049 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:57.049 Zero copy mechanism will not be used. 00:20:57.049 Running I/O for 60 seconds... 00:20:57.308 [2024-10-07 05:40:01.064765] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:57.308 [2024-10-07 05:40:01.070791] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.308 05:40:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.567 05:40:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.567 "name": "raid_bdev1", 00:20:57.567 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:20:57.567 "strip_size_kb": 0, 00:20:57.567 "state": "online", 00:20:57.567 "raid_level": "raid1", 00:20:57.567 "superblock": false, 00:20:57.567 "num_base_bdevs": 2, 00:20:57.567 "num_base_bdevs_discovered": 1, 00:20:57.567 "num_base_bdevs_operational": 1, 00:20:57.567 "base_bdevs_list": [ 00:20:57.567 { 00:20:57.567 "name": null, 00:20:57.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.567 "is_configured": false, 00:20:57.567 "data_offset": 0, 00:20:57.567 "data_size": 65536 00:20:57.567 }, 00:20:57.567 { 00:20:57.567 "name": "BaseBdev2", 00:20:57.567 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:20:57.567 "is_configured": true, 00:20:57.567 "data_offset": 0, 00:20:57.567 "data_size": 65536 00:20:57.567 } 00:20:57.567 ] 00:20:57.567 }' 00:20:57.567 05:40:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.567 05:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 05:40:01 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.395 [2024-10-07 05:40:02.201423] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:58.395 [2024-10-07 05:40:02.201488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.395 05:40:02 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:58.395 [2024-10-07 05:40:02.260492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:58.395 [2024-10-07 05:40:02.262510] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.395 [2024-10-07 05:40:02.370745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:58.395 [2024-10-07 05:40:02.371213] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:58.655 [2024-10-07 05:40:02.579898] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:58.655 [2024-10-07 05:40:02.580098] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:59.222 [2024-10-07 05:40:02.908159] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:59.222 [2024-10-07 05:40:02.908520] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:59.222 [2024-10-07 05:40:03.141504] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.495 05:40:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.755 "name": "raid_bdev1", 00:20:59.755 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:20:59.755 "strip_size_kb": 0, 00:20:59.755 "state": "online", 00:20:59.755 "raid_level": "raid1", 00:20:59.755 "superblock": false, 00:20:59.755 "num_base_bdevs": 2, 00:20:59.755 "num_base_bdevs_discovered": 2, 00:20:59.755 "num_base_bdevs_operational": 2, 00:20:59.755 "process": { 00:20:59.755 "type": "rebuild", 00:20:59.755 "target": "spare", 00:20:59.755 "progress": { 00:20:59.755 "blocks": 12288, 00:20:59.755 "percent": 18 00:20:59.755 } 00:20:59.755 }, 00:20:59.755 "base_bdevs_list": [ 00:20:59.755 { 00:20:59.755 "name": "spare", 00:20:59.755 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:20:59.755 "is_configured": true, 00:20:59.755 "data_offset": 0, 00:20:59.755 "data_size": 65536 00:20:59.755 }, 00:20:59.755 { 00:20:59.755 "name": "BaseBdev2", 00:20:59.755 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:20:59.755 "is_configured": true, 00:20:59.755 "data_offset": 0, 00:20:59.755 "data_size": 65536 00:20:59.755 } 00:20:59.755 ] 00:20:59.755 }' 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.755 [2024-10-07 05:40:03.493192] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.755 05:40:03 -- bdev/bdev_raid.sh@604 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:00.014 [2024-10-07 05:40:03.839710] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:00.014 [2024-10-07 05:40:03.947118] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:00.014 [2024-10-07 05:40:03.954348] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.273 [2024-10-07 05:40:03.993383] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.273 05:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.531 05:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.531 "name": "raid_bdev1", 00:21:00.531 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:00.531 "strip_size_kb": 0, 00:21:00.531 "state": "online", 00:21:00.531 "raid_level": "raid1", 00:21:00.531 "superblock": false, 00:21:00.531 "num_base_bdevs": 2, 00:21:00.531 "num_base_bdevs_discovered": 1, 00:21:00.531 "num_base_bdevs_operational": 1, 00:21:00.531 "base_bdevs_list": [ 00:21:00.531 { 00:21:00.531 "name": null, 00:21:00.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.531 "is_configured": false, 00:21:00.531 "data_offset": 0, 00:21:00.531 "data_size": 65536 00:21:00.531 }, 00:21:00.531 { 00:21:00.531 "name": "BaseBdev2", 00:21:00.531 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:00.531 "is_configured": true, 00:21:00.531 "data_offset": 0, 00:21:00.531 "data_size": 65536 00:21:00.531 } 00:21:00.531 ] 00:21:00.531 }' 00:21:00.531 05:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.531 05:40:04 -- common/autotest_common.sh@10 -- # set +x 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.099 05:40:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.358 05:40:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.358 "name": "raid_bdev1", 00:21:01.358 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:01.358 
"strip_size_kb": 0, 00:21:01.358 "state": "online", 00:21:01.358 "raid_level": "raid1", 00:21:01.358 "superblock": false, 00:21:01.358 "num_base_bdevs": 2, 00:21:01.358 "num_base_bdevs_discovered": 1, 00:21:01.358 "num_base_bdevs_operational": 1, 00:21:01.358 "base_bdevs_list": [ 00:21:01.358 { 00:21:01.358 "name": null, 00:21:01.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.358 "is_configured": false, 00:21:01.358 "data_offset": 0, 00:21:01.358 "data_size": 65536 00:21:01.358 }, 00:21:01.358 { 00:21:01.358 "name": "BaseBdev2", 00:21:01.358 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:01.359 "is_configured": true, 00:21:01.359 "data_offset": 0, 00:21:01.359 "data_size": 65536 00:21:01.359 } 00:21:01.359 ] 00:21:01.359 }' 00:21:01.359 05:40:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.359 05:40:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:01.359 05:40:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.359 05:40:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:01.359 05:40:05 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:01.618 [2024-10-07 05:40:05.475458] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:01.618 [2024-10-07 05:40:05.475514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:01.618 05:40:05 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:01.618 [2024-10-07 05:40:05.515646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:01.618 [2024-10-07 05:40:05.517615] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:01.877 [2024-10-07 05:40:05.631507] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.877 [2024-10-07 05:40:05.631998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:01.877 [2024-10-07 05:40:05.833425] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:01.877 [2024-10-07 05:40:05.833633] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:02.445 [2024-10-07 05:40:06.282836] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.704 05:40:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.704 [2024-10-07 05:40:06.528955] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:02.704 [2024-10-07 05:40:06.643000] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 
offset_begin: 12288 offset_end: 18432 00:21:02.963 05:40:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:02.963 "name": "raid_bdev1", 00:21:02.963 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:02.963 "strip_size_kb": 0, 00:21:02.963 "state": "online", 00:21:02.963 "raid_level": "raid1", 00:21:02.963 "superblock": false, 00:21:02.963 "num_base_bdevs": 2, 00:21:02.963 "num_base_bdevs_discovered": 2, 00:21:02.963 "num_base_bdevs_operational": 2, 00:21:02.963 "process": { 00:21:02.963 "type": "rebuild", 00:21:02.963 "target": "spare", 00:21:02.963 "progress": { 00:21:02.963 "blocks": 16384, 00:21:02.963 "percent": 25 00:21:02.963 } 00:21:02.963 }, 00:21:02.963 "base_bdevs_list": [ 00:21:02.963 { 00:21:02.963 "name": "spare", 00:21:02.963 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:02.963 "is_configured": true, 00:21:02.963 "data_offset": 0, 00:21:02.963 "data_size": 65536 00:21:02.963 }, 00:21:02.964 { 00:21:02.964 "name": "BaseBdev2", 00:21:02.964 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:02.964 "is_configured": true, 00:21:02.964 "data_offset": 0, 00:21:02.964 "data_size": 65536 00:21:02.964 } 00:21:02.964 ] 00:21:02.964 }' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@657 -- # local timeout=446 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.964 05:40:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.964 [2024-10-07 05:40:06.864227] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:02.964 [2024-10-07 05:40:06.864653] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:03.222 [2024-10-07 05:40:06.985972] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:03.222 05:40:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.222 "name": "raid_bdev1", 00:21:03.222 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:03.222 "strip_size_kb": 0, 00:21:03.222 "state": "online", 00:21:03.222 "raid_level": "raid1", 00:21:03.222 "superblock": false, 00:21:03.222 "num_base_bdevs": 2, 00:21:03.222 "num_base_bdevs_discovered": 2, 00:21:03.222 "num_base_bdevs_operational": 2, 00:21:03.222 "process": { 
00:21:03.222 "type": "rebuild", 00:21:03.222 "target": "spare", 00:21:03.222 "progress": { 00:21:03.222 "blocks": 22528, 00:21:03.222 "percent": 34 00:21:03.222 } 00:21:03.222 }, 00:21:03.222 "base_bdevs_list": [ 00:21:03.222 { 00:21:03.222 "name": "spare", 00:21:03.222 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:03.222 "is_configured": true, 00:21:03.222 "data_offset": 0, 00:21:03.222 "data_size": 65536 00:21:03.222 }, 00:21:03.222 { 00:21:03.222 "name": "BaseBdev2", 00:21:03.222 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:03.222 "is_configured": true, 00:21:03.222 "data_offset": 0, 00:21:03.222 "data_size": 65536 00:21:03.222 } 00:21:03.222 ] 00:21:03.222 }' 00:21:03.222 05:40:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.222 05:40:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.222 05:40:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.481 05:40:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.481 05:40:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:03.740 [2024-10-07 05:40:07.674613] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:04.309 [2024-10-07 05:40:08.022820] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.309 05:40:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.566 [2024-10-07 05:40:08.439042] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:04.566 05:40:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:04.566 "name": "raid_bdev1", 00:21:04.566 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:04.566 "strip_size_kb": 0, 00:21:04.566 "state": "online", 00:21:04.566 "raid_level": "raid1", 00:21:04.566 "superblock": false, 00:21:04.566 "num_base_bdevs": 2, 00:21:04.566 "num_base_bdevs_discovered": 2, 00:21:04.566 "num_base_bdevs_operational": 2, 00:21:04.566 "process": { 00:21:04.566 "type": "rebuild", 00:21:04.567 "target": "spare", 00:21:04.567 "progress": { 00:21:04.567 "blocks": 47104, 00:21:04.567 "percent": 71 00:21:04.567 } 00:21:04.567 }, 00:21:04.567 "base_bdevs_list": [ 00:21:04.567 { 00:21:04.567 "name": "spare", 00:21:04.567 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:04.567 "is_configured": true, 00:21:04.567 "data_offset": 0, 00:21:04.567 "data_size": 65536 00:21:04.567 }, 00:21:04.567 { 00:21:04.567 "name": "BaseBdev2", 00:21:04.567 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:04.567 "is_configured": true, 00:21:04.567 "data_offset": 0, 00:21:04.567 "data_size": 65536 00:21:04.567 } 00:21:04.567 ] 00:21:04.567 }' 00:21:04.567 05:40:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:04.567 05:40:08 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:04.567 05:40:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:04.824 05:40:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.824 05:40:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:05.762 [2024-10-07 05:40:09.515088] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.762 05:40:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.762 [2024-10-07 05:40:09.615155] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:05.762 [2024-10-07 05:40:09.616928] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.021 "name": "raid_bdev1", 00:21:06.021 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:06.021 "strip_size_kb": 0, 00:21:06.021 "state": "online", 00:21:06.021 "raid_level": "raid1", 00:21:06.021 "superblock": false, 00:21:06.021 "num_base_bdevs": 2, 00:21:06.021 "num_base_bdevs_discovered": 2, 00:21:06.021 "num_base_bdevs_operational": 2, 00:21:06.021 "base_bdevs_list": [ 00:21:06.021 { 00:21:06.021 "name": "spare", 00:21:06.021 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:06.021 "is_configured": true, 00:21:06.021 "data_offset": 0, 00:21:06.021 "data_size": 65536 00:21:06.021 }, 00:21:06.021 { 00:21:06.021 "name": "BaseBdev2", 00:21:06.021 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:06.021 "is_configured": true, 00:21:06.021 "data_offset": 0, 00:21:06.021 "data_size": 65536 00:21:06.021 } 00:21:06.021 ] 00:21:06.021 }' 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@660 -- # break 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.021 05:40:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.281 "name": "raid_bdev1", 00:21:06.281 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:06.281 "strip_size_kb": 
0, 00:21:06.281 "state": "online", 00:21:06.281 "raid_level": "raid1", 00:21:06.281 "superblock": false, 00:21:06.281 "num_base_bdevs": 2, 00:21:06.281 "num_base_bdevs_discovered": 2, 00:21:06.281 "num_base_bdevs_operational": 2, 00:21:06.281 "base_bdevs_list": [ 00:21:06.281 { 00:21:06.281 "name": "spare", 00:21:06.281 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:06.281 "is_configured": true, 00:21:06.281 "data_offset": 0, 00:21:06.281 "data_size": 65536 00:21:06.281 }, 00:21:06.281 { 00:21:06.281 "name": "BaseBdev2", 00:21:06.281 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:06.281 "is_configured": true, 00:21:06.281 "data_offset": 0, 00:21:06.281 "data_size": 65536 00:21:06.281 } 00:21:06.281 ] 00:21:06.281 }' 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:06.281 05:40:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.282 05:40:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.868 05:40:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.868 "name": "raid_bdev1", 00:21:06.868 "uuid": "ee157929-801f-4bea-bd35-8b7144539ff1", 00:21:06.868 "strip_size_kb": 0, 00:21:06.868 "state": "online", 00:21:06.868 "raid_level": "raid1", 00:21:06.868 "superblock": false, 00:21:06.868 "num_base_bdevs": 2, 00:21:06.868 "num_base_bdevs_discovered": 2, 00:21:06.868 "num_base_bdevs_operational": 2, 00:21:06.868 "base_bdevs_list": [ 00:21:06.868 { 00:21:06.868 "name": "spare", 00:21:06.868 "uuid": "ea165bbc-a21e-59f0-b018-0dfb0ed6d65a", 00:21:06.868 "is_configured": true, 00:21:06.868 "data_offset": 0, 00:21:06.868 "data_size": 65536 00:21:06.868 }, 00:21:06.868 { 00:21:06.868 "name": "BaseBdev2", 00:21:06.868 "uuid": "74cd849b-aae1-4ea8-855f-5eddc9312198", 00:21:06.868 "is_configured": true, 00:21:06.868 "data_offset": 0, 00:21:06.868 "data_size": 65536 00:21:06.868 } 00:21:06.868 ] 00:21:06.868 }' 00:21:06.868 05:40:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.868 05:40:10 -- common/autotest_common.sh@10 -- # set +x 00:21:07.135 05:40:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:07.394 [2024-10-07 05:40:11.278226] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.394 [2024-10-07 05:40:11.278273] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:21:07.394 00:21:07.394 Latency(us) 00:21:07.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.394 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:07.394 raid_bdev1 : 10.38 124.98 374.94 0.00 0.00 10488.05 283.00 114866.73 00:21:07.394 =================================================================================================================== 00:21:07.394 Total : 124.98 374.94 0.00 0.00 10488.05 283.00 114866.73 00:21:07.394 [2024-10-07 05:40:11.353043] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.394 [2024-10-07 05:40:11.353100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.394 [2024-10-07 05:40:11.353177] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.394 [2024-10-07 05:40:11.353191] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:07.394 0 00:21:07.394 05:40:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.394 05:40:11 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:07.653 05:40:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:07.653 05:40:11 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:07.653 05:40:11 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@12 -- # local i 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:07.653 05:40:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:07.911 /dev/nbd0 00:21:07.911 05:40:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:07.911 05:40:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:07.911 05:40:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:07.911 05:40:11 -- common/autotest_common.sh@857 -- # local i 00:21:07.911 05:40:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:07.911 05:40:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:07.911 05:40:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:08.169 05:40:11 -- common/autotest_common.sh@861 -- # break 00:21:08.169 05:40:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:08.169 05:40:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:08.169 05:40:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.169 1+0 records in 00:21:08.169 1+0 records out 00:21:08.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917895 s, 4.5 MB/s 00:21:08.169 05:40:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.169 05:40:11 -- common/autotest_common.sh@874 -- # size=4096 00:21:08.169 05:40:11 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.169 05:40:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:08.169 05:40:11 -- common/autotest_common.sh@877 -- # return 0 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.169 05:40:11 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:08.169 05:40:11 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:08.169 05:40:11 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@12 -- # local i 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.169 05:40:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:08.427 /dev/nbd1 00:21:08.427 05:40:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:08.427 05:40:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:08.427 05:40:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:08.427 05:40:12 -- common/autotest_common.sh@857 -- # local i 00:21:08.427 05:40:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:08.427 05:40:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:08.427 05:40:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:08.427 05:40:12 -- common/autotest_common.sh@861 -- # break 00:21:08.427 05:40:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:08.427 05:40:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:08.427 05:40:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:08.427 1+0 records in 00:21:08.427 1+0 records out 00:21:08.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922704 s, 4.4 MB/s 00:21:08.427 05:40:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.427 05:40:12 -- common/autotest_common.sh@874 -- # size=4096 00:21:08.428 05:40:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:08.428 05:40:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:08.428 05:40:12 -- common/autotest_common.sh@877 -- # return 0 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:08.428 05:40:12 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:08.428 05:40:12 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@51 -- # local i 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.428 05:40:12 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@41 -- # break 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.994 05:40:12 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@51 -- # local i 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@41 -- # break 00:21:08.994 05:40:12 -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.994 05:40:12 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:08.994 05:40:12 -- bdev/bdev_raid.sh@709 -- # killprocess 167299 00:21:08.994 05:40:12 -- common/autotest_common.sh@926 -- # '[' -z 167299 ']' 00:21:08.994 05:40:12 -- common/autotest_common.sh@930 -- # kill -0 167299 00:21:08.994 05:40:12 -- common/autotest_common.sh@931 -- # uname 00:21:09.253 05:40:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:09.253 05:40:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 167299 00:21:09.253 05:40:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:09.253 05:40:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:09.253 05:40:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 167299' 00:21:09.253 killing process with pid 167299 00:21:09.253 05:40:12 -- common/autotest_common.sh@945 -- # kill 167299 00:21:09.253 Received shutdown signal, test time was about 12.035031 seconds 00:21:09.253 00:21:09.253 Latency(us) 00:21:09.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.253 =================================================================================================================== 00:21:09.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.253 [2024-10-07 05:40:12.995425] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:09.253 05:40:12 -- common/autotest_common.sh@950 -- # wait 167299 00:21:09.253 [2024-10-07 05:40:13.153575] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.631 ************************************ 00:21:10.631 END TEST raid_rebuild_test_io 00:21:10.631 ************************************ 
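The trace above is the data-integrity check for the non-superblock rebuild test: the rebuilt member (spare) and the surviving member (BaseBdev2) are exported as NBD devices and compared byte-for-byte from offset 0 before the bdevperf process is killed. A minimal sketch of that pattern, using only the rpc.py calls visible in the trace (device names and socket path taken from the log above; this is an illustration, not a verbatim copy of the test script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # export both raid members as kernel block devices
  $rpc -s $sock nbd_start_disk spare     /dev/nbd0
  $rpc -s $sock nbd_start_disk BaseBdev2 /dev/nbd1
  # no superblock in this variant, so compare from byte 0
  cmp -i 0 /dev/nbd0 /dev/nbd1
  # detach the NBD devices again
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_stop_disk /dev/nbd0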
00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:10.631 00:21:10.631 real 0m17.105s 00:21:10.631 user 0m26.549s 00:21:10.631 sys 0m1.945s 00:21:10.631 05:40:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.631 05:40:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:10.631 05:40:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:10.631 05:40:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:10.631 05:40:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.631 ************************************ 00:21:10.631 START TEST raid_rebuild_test_sb_io 00:21:10.631 ************************************ 00:21:10.631 05:40:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:10.631 05:40:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=167765 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 167765 /var/tmp/spdk-raid.sock 00:21:10.632 05:40:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:10.632 05:40:14 -- common/autotest_common.sh@819 -- # '[' -z 167765 ']' 00:21:10.632 05:40:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:10.632 05:40:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:10.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:10.632 05:40:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:21:10.632 05:40:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:10.632 05:40:14 -- common/autotest_common.sh@10 -- # set +x 00:21:10.632 [2024-10-07 05:40:14.333809] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:10.632 [2024-10-07 05:40:14.333958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167765 ] 00:21:10.632 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.632 Zero copy mechanism will not be used. 00:21:10.632 [2024-10-07 05:40:14.486822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.891 [2024-10-07 05:40:14.681098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.891 [2024-10-07 05:40:14.866435] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.459 05:40:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:11.459 05:40:15 -- common/autotest_common.sh@852 -- # return 0 00:21:11.459 05:40:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:11.459 05:40:15 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:11.459 05:40:15 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:11.717 BaseBdev1_malloc 00:21:11.717 05:40:15 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.976 [2024-10-07 05:40:15.765768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.976 [2024-10-07 05:40:15.765859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.976 [2024-10-07 05:40:15.765900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:11.976 [2024-10-07 05:40:15.765953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.976 [2024-10-07 05:40:15.768426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.976 [2024-10-07 05:40:15.768494] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.976 BaseBdev1 00:21:11.976 05:40:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:11.976 05:40:15 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:11.976 05:40:15 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:12.235 BaseBdev2_malloc 00:21:12.235 05:40:16 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:12.235 [2024-10-07 05:40:16.197446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:12.235 [2024-10-07 05:40:16.197526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.235 [2024-10-07 05:40:16.197570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:12.235 [2024-10-07 05:40:16.197625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.235 [2024-10-07 05:40:16.199935] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:12.235 [2024-10-07 05:40:16.199986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:12.235 BaseBdev2 00:21:12.235 05:40:16 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:12.494 spare_malloc 00:21:12.494 05:40:16 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:12.752 spare_delay 00:21:12.752 05:40:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:13.011 [2024-10-07 05:40:16.806432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:13.011 [2024-10-07 05:40:16.806534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.011 [2024-10-07 05:40:16.806580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:13.011 [2024-10-07 05:40:16.806627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.011 [2024-10-07 05:40:16.809050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.011 [2024-10-07 05:40:16.809107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:13.011 spare 00:21:13.011 05:40:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:13.270 [2024-10-07 05:40:17.050634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.270 [2024-10-07 05:40:17.052709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.270 [2024-10-07 05:40:17.053057] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:13.270 [2024-10-07 05:40:17.053073] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:13.270 [2024-10-07 05:40:17.053274] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:13.270 [2024-10-07 05:40:17.053646] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:13.270 [2024-10-07 05:40:17.053663] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:13.270 [2024-10-07 05:40:17.053842] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.270 05:40:17 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.270 "name": "raid_bdev1", 00:21:13.270 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:13.270 "strip_size_kb": 0, 00:21:13.270 "state": "online", 00:21:13.270 "raid_level": "raid1", 00:21:13.270 "superblock": true, 00:21:13.270 "num_base_bdevs": 2, 00:21:13.270 "num_base_bdevs_discovered": 2, 00:21:13.270 "num_base_bdevs_operational": 2, 00:21:13.270 "base_bdevs_list": [ 00:21:13.270 { 00:21:13.270 "name": "BaseBdev1", 00:21:13.270 "uuid": "45c25616-b792-5ad9-a41e-fde96b1b2f92", 00:21:13.270 "is_configured": true, 00:21:13.270 "data_offset": 2048, 00:21:13.270 "data_size": 63488 00:21:13.270 }, 00:21:13.270 { 00:21:13.270 "name": "BaseBdev2", 00:21:13.270 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:13.270 "is_configured": true, 00:21:13.270 "data_offset": 2048, 00:21:13.270 "data_size": 63488 00:21:13.270 } 00:21:13.270 ] 00:21:13.270 }' 00:21:13.270 05:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.270 05:40:17 -- common/autotest_common.sh@10 -- # set +x 00:21:14.206 05:40:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.206 05:40:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:14.206 [2024-10-07 05:40:18.082921] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.206 05:40:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:14.206 05:40:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.206 05:40:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:14.473 05:40:18 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:14.473 05:40:18 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:14.473 05:40:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:14.473 05:40:18 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:14.736 [2024-10-07 05:40:18.455078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:14.736 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:14.736 Zero copy mechanism will not be used. 00:21:14.736 Running I/O for 60 seconds... 
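At this point the superblock variant has assembled raid_bdev1 (63488 data blocks, data_offset 2048), started 60 seconds of background randrw I/O through the bdevperf RPC, and hot-removed BaseBdev1 with bdev_raid_remove_base_bdev. A hedged sketch of that remove-under-I/O step, assuming the same bdevperf instance and RPC socket as in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # kick off the background workload defined on the bdevperf command line (-t 60 -w randrw -q 2)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
  # drop one member while the raid1 bdev stays online and keeps serving I/O
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1
  # the array should now report a single discovered base bdev
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'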
00:21:14.736 [2024-10-07 05:40:18.577795] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:14.736 [2024-10-07 05:40:18.590196] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.736 05:40:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.995 05:40:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.995 "name": "raid_bdev1", 00:21:14.995 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:14.995 "strip_size_kb": 0, 00:21:14.995 "state": "online", 00:21:14.995 "raid_level": "raid1", 00:21:14.995 "superblock": true, 00:21:14.995 "num_base_bdevs": 2, 00:21:14.995 "num_base_bdevs_discovered": 1, 00:21:14.995 "num_base_bdevs_operational": 1, 00:21:14.995 "base_bdevs_list": [ 00:21:14.995 { 00:21:14.995 "name": null, 00:21:14.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.995 "is_configured": false, 00:21:14.995 "data_offset": 2048, 00:21:14.995 "data_size": 63488 00:21:14.995 }, 00:21:14.995 { 00:21:14.995 "name": "BaseBdev2", 00:21:14.995 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:14.995 "is_configured": true, 00:21:14.995 "data_offset": 2048, 00:21:14.995 "data_size": 63488 00:21:14.995 } 00:21:14.995 ] 00:21:14.995 }' 00:21:14.995 05:40:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.995 05:40:18 -- common/autotest_common.sh@10 -- # set +x 00:21:15.932 05:40:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:15.932 [2024-10-07 05:40:19.802895] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:15.932 [2024-10-07 05:40:19.802961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:15.932 05:40:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:15.932 [2024-10-07 05:40:19.855988] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:15.932 [2024-10-07 05:40:19.858072] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:16.192 [2024-10-07 05:40:19.983092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.192 [2024-10-07 05:40:19.983481] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:16.452 [2024-10-07 05:40:20.215956] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:16.452 [2024-10-07 05:40:20.216114] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:16.711 [2024-10-07 05:40:20.547461] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:16.970 [2024-10-07 05:40:20.768607] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.970 05:40:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:17.229 "name": "raid_bdev1", 00:21:17.229 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:17.229 "strip_size_kb": 0, 00:21:17.229 "state": "online", 00:21:17.229 "raid_level": "raid1", 00:21:17.229 "superblock": true, 00:21:17.229 "num_base_bdevs": 2, 00:21:17.229 "num_base_bdevs_discovered": 2, 00:21:17.229 "num_base_bdevs_operational": 2, 00:21:17.229 "process": { 00:21:17.229 "type": "rebuild", 00:21:17.229 "target": "spare", 00:21:17.229 "progress": { 00:21:17.229 "blocks": 14336, 00:21:17.229 "percent": 22 00:21:17.229 } 00:21:17.229 }, 00:21:17.229 "base_bdevs_list": [ 00:21:17.229 { 00:21:17.229 "name": "spare", 00:21:17.229 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:17.229 "is_configured": true, 00:21:17.229 "data_offset": 2048, 00:21:17.229 "data_size": 63488 00:21:17.229 }, 00:21:17.229 { 00:21:17.229 "name": "BaseBdev2", 00:21:17.229 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:17.229 "is_configured": true, 00:21:17.229 "data_offset": 2048, 00:21:17.229 "data_size": 63488 00:21:17.229 } 00:21:17.229 ] 00:21:17.229 }' 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:17.229 [2024-10-07 05:40:21.134848] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.229 05:40:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:17.489 [2024-10-07 05:40:21.438767] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.489 [2024-10-07 05:40:21.466515] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:17.749 [2024-10-07 05:40:21.467897] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:17.749 [2024-10-07 05:40:21.481434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.749 [2024-10-07 05:40:21.514814] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.749 05:40:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.009 05:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.009 "name": "raid_bdev1", 00:21:18.009 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:18.009 "strip_size_kb": 0, 00:21:18.009 "state": "online", 00:21:18.009 "raid_level": "raid1", 00:21:18.009 "superblock": true, 00:21:18.009 "num_base_bdevs": 2, 00:21:18.009 "num_base_bdevs_discovered": 1, 00:21:18.009 "num_base_bdevs_operational": 1, 00:21:18.009 "base_bdevs_list": [ 00:21:18.009 { 00:21:18.009 "name": null, 00:21:18.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.009 "is_configured": false, 00:21:18.009 "data_offset": 2048, 00:21:18.009 "data_size": 63488 00:21:18.009 }, 00:21:18.009 { 00:21:18.009 "name": "BaseBdev2", 00:21:18.009 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:18.009 "is_configured": true, 00:21:18.009 "data_offset": 2048, 00:21:18.009 "data_size": 63488 00:21:18.009 } 00:21:18.009 ] 00:21:18.009 }' 00:21:18.009 05:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.009 05:40:21 -- common/autotest_common.sh@10 -- # set +x 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.579 05:40:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.845 05:40:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.845 "name": "raid_bdev1", 00:21:18.846 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:18.846 "strip_size_kb": 0, 00:21:18.846 "state": "online", 00:21:18.846 "raid_level": "raid1", 00:21:18.846 "superblock": true, 00:21:18.846 "num_base_bdevs": 2, 00:21:18.846 "num_base_bdevs_discovered": 1, 00:21:18.846 "num_base_bdevs_operational": 1, 00:21:18.846 "base_bdevs_list": [ 00:21:18.846 { 00:21:18.846 "name": null, 00:21:18.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.846 "is_configured": false, 00:21:18.846 "data_offset": 2048, 00:21:18.846 "data_size": 63488 00:21:18.846 }, 00:21:18.846 { 00:21:18.846 
"name": "BaseBdev2", 00:21:18.846 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:18.846 "is_configured": true, 00:21:18.846 "data_offset": 2048, 00:21:18.846 "data_size": 63488 00:21:18.846 } 00:21:18.846 ] 00:21:18.846 }' 00:21:18.846 05:40:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.846 05:40:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:18.846 05:40:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.846 05:40:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:18.846 05:40:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:19.106 [2024-10-07 05:40:23.048339] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:19.106 [2024-10-07 05:40:23.048402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.106 [2024-10-07 05:40:23.077134] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:19.106 [2024-10-07 05:40:23.079266] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.365 05:40:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:19.365 [2024-10-07 05:40:23.201982] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:19.365 [2024-10-07 05:40:23.202358] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:19.624 [2024-10-07 05:40:23.411378] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:19.624 [2024-10-07 05:40:23.411562] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:19.884 [2024-10-07 05:40:23.760058] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:20.143 [2024-10-07 05:40:23.886844] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:20.143 [2024-10-07 05:40:23.887033] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.143 05:40:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.143 [2024-10-07 05:40:24.102155] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:20.403 05:40:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.403 "name": "raid_bdev1", 00:21:20.403 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:20.403 "strip_size_kb": 0, 00:21:20.403 "state": "online", 00:21:20.403 "raid_level": "raid1", 00:21:20.403 "superblock": true, 00:21:20.403 "num_base_bdevs": 2, 
00:21:20.403 "num_base_bdevs_discovered": 2, 00:21:20.403 "num_base_bdevs_operational": 2, 00:21:20.403 "process": { 00:21:20.403 "type": "rebuild", 00:21:20.403 "target": "spare", 00:21:20.403 "progress": { 00:21:20.403 "blocks": 16384, 00:21:20.403 "percent": 25 00:21:20.403 } 00:21:20.403 }, 00:21:20.403 "base_bdevs_list": [ 00:21:20.403 { 00:21:20.403 "name": "spare", 00:21:20.403 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:20.403 "is_configured": true, 00:21:20.403 "data_offset": 2048, 00:21:20.403 "data_size": 63488 00:21:20.403 }, 00:21:20.403 { 00:21:20.403 "name": "BaseBdev2", 00:21:20.403 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:20.403 "is_configured": true, 00:21:20.403 "data_offset": 2048, 00:21:20.403 "data_size": 63488 00:21:20.403 } 00:21:20.403 ] 00:21:20.403 }' 00:21:20.403 05:40:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:20.708 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@657 -- # local timeout=464 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.708 05:40:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.708 [2024-10-07 05:40:24.530351] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:21.001 05:40:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:21.001 "name": "raid_bdev1", 00:21:21.001 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:21.001 "strip_size_kb": 0, 00:21:21.001 "state": "online", 00:21:21.001 "raid_level": "raid1", 00:21:21.001 "superblock": true, 00:21:21.001 "num_base_bdevs": 2, 00:21:21.001 "num_base_bdevs_discovered": 2, 00:21:21.001 "num_base_bdevs_operational": 2, 00:21:21.002 "process": { 00:21:21.002 "type": "rebuild", 00:21:21.002 "target": "spare", 00:21:21.002 "progress": { 00:21:21.002 "blocks": 20480, 00:21:21.002 "percent": 32 00:21:21.002 } 00:21:21.002 }, 00:21:21.002 "base_bdevs_list": [ 00:21:21.002 { 00:21:21.002 "name": "spare", 00:21:21.002 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:21.002 "is_configured": true, 00:21:21.002 "data_offset": 2048, 00:21:21.002 "data_size": 63488 00:21:21.002 }, 00:21:21.002 { 00:21:21.002 "name": "BaseBdev2", 00:21:21.002 "uuid": 
"16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:21.002 "is_configured": true, 00:21:21.002 "data_offset": 2048, 00:21:21.002 "data_size": 63488 00:21:21.002 } 00:21:21.002 ] 00:21:21.002 }' 00:21:21.002 05:40:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.002 05:40:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.002 05:40:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.002 [2024-10-07 05:40:24.766487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:21.002 [2024-10-07 05:40:24.766841] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:21.002 05:40:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.002 05:40:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:21.260 [2024-10-07 05:40:25.010519] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:21.260 [2024-10-07 05:40:25.130871] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:21.519 [2024-10-07 05:40:25.467687] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.088 05:40:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.088 [2024-10-07 05:40:25.807047] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:22.088 05:40:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.088 "name": "raid_bdev1", 00:21:22.088 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:22.088 "strip_size_kb": 0, 00:21:22.088 "state": "online", 00:21:22.088 "raid_level": "raid1", 00:21:22.088 "superblock": true, 00:21:22.088 "num_base_bdevs": 2, 00:21:22.088 "num_base_bdevs_discovered": 2, 00:21:22.088 "num_base_bdevs_operational": 2, 00:21:22.088 "process": { 00:21:22.088 "type": "rebuild", 00:21:22.088 "target": "spare", 00:21:22.088 "progress": { 00:21:22.088 "blocks": 38912, 00:21:22.088 "percent": 61 00:21:22.088 } 00:21:22.088 }, 00:21:22.088 "base_bdevs_list": [ 00:21:22.088 { 00:21:22.088 "name": "spare", 00:21:22.088 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:22.088 "is_configured": true, 00:21:22.088 "data_offset": 2048, 00:21:22.088 "data_size": 63488 00:21:22.088 }, 00:21:22.088 { 00:21:22.088 "name": "BaseBdev2", 00:21:22.088 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:22.088 "is_configured": true, 00:21:22.088 "data_offset": 2048, 00:21:22.088 "data_size": 63488 00:21:22.088 } 00:21:22.088 ] 00:21:22.088 }' 00:21:22.088 05:40:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.088 [2024-10-07 
05:40:26.035624] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:22.347 05:40:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.347 05:40:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.347 05:40:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.347 05:40:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:22.916 [2024-10-07 05:40:26.592242] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:22.916 [2024-10-07 05:40:26.699443] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:23.175 [2024-10-07 05:40:27.024701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.175 05:40:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.435 [2024-10-07 05:40:27.355870] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:23.435 05:40:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.435 "name": "raid_bdev1", 00:21:23.435 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:23.435 "strip_size_kb": 0, 00:21:23.435 "state": "online", 00:21:23.435 "raid_level": "raid1", 00:21:23.435 "superblock": true, 00:21:23.435 "num_base_bdevs": 2, 00:21:23.435 "num_base_bdevs_discovered": 2, 00:21:23.435 "num_base_bdevs_operational": 2, 00:21:23.435 "process": { 00:21:23.435 "type": "rebuild", 00:21:23.435 "target": "spare", 00:21:23.435 "progress": { 00:21:23.435 "blocks": 63488, 00:21:23.435 "percent": 100 00:21:23.435 } 00:21:23.435 }, 00:21:23.435 "base_bdevs_list": [ 00:21:23.435 { 00:21:23.435 "name": "spare", 00:21:23.435 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:23.435 "is_configured": true, 00:21:23.435 "data_offset": 2048, 00:21:23.435 "data_size": 63488 00:21:23.435 }, 00:21:23.435 { 00:21:23.435 "name": "BaseBdev2", 00:21:23.435 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:23.435 "is_configured": true, 00:21:23.435 "data_offset": 2048, 00:21:23.435 "data_size": 63488 00:21:23.435 } 00:21:23.435 ] 00:21:23.435 }' 00:21:23.435 05:40:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.695 05:40:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.695 05:40:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.695 [2024-10-07 05:40:27.455947] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:23.695 [2024-10-07 05:40:27.457611] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.695 05:40:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.695 05:40:27 -- 
bdev/bdev_raid.sh@662 -- # sleep 1 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.633 05:40:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.892 "name": "raid_bdev1", 00:21:24.892 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:24.892 "strip_size_kb": 0, 00:21:24.892 "state": "online", 00:21:24.892 "raid_level": "raid1", 00:21:24.892 "superblock": true, 00:21:24.892 "num_base_bdevs": 2, 00:21:24.892 "num_base_bdevs_discovered": 2, 00:21:24.892 "num_base_bdevs_operational": 2, 00:21:24.892 "base_bdevs_list": [ 00:21:24.892 { 00:21:24.892 "name": "spare", 00:21:24.892 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:24.892 "is_configured": true, 00:21:24.892 "data_offset": 2048, 00:21:24.892 "data_size": 63488 00:21:24.892 }, 00:21:24.892 { 00:21:24.892 "name": "BaseBdev2", 00:21:24.892 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:24.892 "is_configured": true, 00:21:24.892 "data_offset": 2048, 00:21:24.892 "data_size": 63488 00:21:24.892 } 00:21:24.892 ] 00:21:24.892 }' 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@660 -- # break 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.892 05:40:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.151 05:40:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.151 "name": "raid_bdev1", 00:21:25.151 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:25.151 "strip_size_kb": 0, 00:21:25.151 "state": "online", 00:21:25.151 "raid_level": "raid1", 00:21:25.151 "superblock": true, 00:21:25.151 "num_base_bdevs": 2, 00:21:25.151 "num_base_bdevs_discovered": 2, 00:21:25.151 "num_base_bdevs_operational": 2, 00:21:25.151 "base_bdevs_list": [ 00:21:25.151 { 00:21:25.151 "name": "spare", 00:21:25.151 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:25.151 "is_configured": true, 00:21:25.151 "data_offset": 2048, 00:21:25.151 "data_size": 63488 00:21:25.151 }, 00:21:25.151 { 00:21:25.151 "name": "BaseBdev2", 00:21:25.151 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:25.151 
"is_configured": true, 00:21:25.151 "data_offset": 2048, 00:21:25.151 "data_size": 63488 00:21:25.151 } 00:21:25.151 ] 00:21:25.151 }' 00:21:25.151 05:40:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.410 05:40:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.670 05:40:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.670 "name": "raid_bdev1", 00:21:25.670 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:25.670 "strip_size_kb": 0, 00:21:25.670 "state": "online", 00:21:25.670 "raid_level": "raid1", 00:21:25.670 "superblock": true, 00:21:25.670 "num_base_bdevs": 2, 00:21:25.670 "num_base_bdevs_discovered": 2, 00:21:25.670 "num_base_bdevs_operational": 2, 00:21:25.670 "base_bdevs_list": [ 00:21:25.670 { 00:21:25.670 "name": "spare", 00:21:25.670 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:25.670 "is_configured": true, 00:21:25.670 "data_offset": 2048, 00:21:25.670 "data_size": 63488 00:21:25.670 }, 00:21:25.670 { 00:21:25.670 "name": "BaseBdev2", 00:21:25.670 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:25.670 "is_configured": true, 00:21:25.670 "data_offset": 2048, 00:21:25.670 "data_size": 63488 00:21:25.670 } 00:21:25.670 ] 00:21:25.670 }' 00:21:25.670 05:40:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.670 05:40:29 -- common/autotest_common.sh@10 -- # set +x 00:21:26.238 05:40:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:26.497 [2024-10-07 05:40:30.380151] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:26.497 [2024-10-07 05:40:30.380191] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:26.497 00:21:26.497 Latency(us) 00:21:26.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.497 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:26.497 raid_bdev1 : 12.01 118.69 356.06 0.00 0.00 11734.13 283.00 122016.12 00:21:26.497 =================================================================================================================== 00:21:26.497 Total : 118.69 356.06 0.00 0.00 11734.13 283.00 122016.12 00:21:26.756 0 00:21:26.756 [2024-10-07 05:40:30.489102] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.756 [2024-10-07 05:40:30.489149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:26.756 [2024-10-07 05:40:30.489240] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:26.756 [2024-10-07 05:40:30.489253] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:26.756 05:40:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.756 05:40:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:27.015 05:40:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:27.015 05:40:30 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:27.015 05:40:30 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@12 -- # local i 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.015 05:40:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:27.274 /dev/nbd0 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:27.274 05:40:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:27.274 05:40:31 -- common/autotest_common.sh@857 -- # local i 00:21:27.274 05:40:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:27.274 05:40:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:27.274 05:40:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:27.274 05:40:31 -- common/autotest_common.sh@861 -- # break 00:21:27.274 05:40:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:27.274 05:40:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:27.274 05:40:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.274 1+0 records in 00:21:27.274 1+0 records out 00:21:27.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033743 s, 12.1 MB/s 00:21:27.274 05:40:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.274 05:40:31 -- common/autotest_common.sh@874 -- # size=4096 00:21:27.274 05:40:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.274 05:40:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:27.274 05:40:31 -- common/autotest_common.sh@877 -- # return 0 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.274 05:40:31 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:27.274 05:40:31 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:27.274 05:40:31 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:27.274 
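The comparison that follows uses cmp -i 1048576 rather than -i 0: with a superblock, the raid data begins at data_offset 2048 blocks, and at the 512-byte blocklen reported when the array was assembled that is 2048 x 512 B = 1048576 B, so the first 1 MiB of each exported member (the superblock region) is skipped. A small sketch of the offset calculation and the resulting check, assuming the same NBD mapping as above:

  # data_offset (blocks) * blocklen (bytes) = byte offset to skip
  offset=$((2048 * 512))          # = 1048576
  cmp -i "$offset" /dev/nbd0 /dev/nbd1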
05:40:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@12 -- # local i 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.274 05:40:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:27.533 /dev/nbd1 00:21:27.533 05:40:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:27.533 05:40:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:27.533 05:40:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:27.533 05:40:31 -- common/autotest_common.sh@857 -- # local i 00:21:27.533 05:40:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:27.533 05:40:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:27.533 05:40:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:27.533 05:40:31 -- common/autotest_common.sh@861 -- # break 00:21:27.533 05:40:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:27.533 05:40:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:27.533 05:40:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:27.533 1+0 records in 00:21:27.533 1+0 records out 00:21:27.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591668 s, 6.9 MB/s 00:21:27.533 05:40:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.533 05:40:31 -- common/autotest_common.sh@874 -- # size=4096 00:21:27.533 05:40:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:27.533 05:40:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:27.533 05:40:31 -- common/autotest_common.sh@877 -- # return 0 00:21:27.533 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:27.533 05:40:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:27.533 05:40:31 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:27.792 05:40:31 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:27.792 05:40:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@41 -- # break 00:21:28.051 05:40:31 
-- bdev/nbd_common.sh@45 -- # return 0 00:21:28.051 05:40:31 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@51 -- # local i 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.051 05:40:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@41 -- # break 00:21:28.310 05:40:32 -- bdev/nbd_common.sh@45 -- # return 0 00:21:28.310 05:40:32 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:28.310 05:40:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:28.310 05:40:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:28.310 05:40:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:28.310 05:40:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.569 [2024-10-07 05:40:32.479947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.569 [2024-10-07 05:40:32.480040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.569 [2024-10-07 05:40:32.480080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:28.569 [2024-10-07 05:40:32.480109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.569 [2024-10-07 05:40:32.481976] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.569 [2024-10-07 05:40:32.482050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.569 [2024-10-07 05:40:32.482161] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:28.569 [2024-10-07 05:40:32.482219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.569 BaseBdev1 00:21:28.569 05:40:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:28.569 05:40:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:28.569 05:40:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:28.828 05:40:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:29.086 [2024-10-07 05:40:32.898117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:29.086 [2024-10-07 05:40:32.898186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.087 [2024-10-07 
05:40:32.898220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:29.087 [2024-10-07 05:40:32.898254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.087 [2024-10-07 05:40:32.898669] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.087 [2024-10-07 05:40:32.898734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:29.087 [2024-10-07 05:40:32.898843] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:29.087 [2024-10-07 05:40:32.898860] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:29.087 [2024-10-07 05:40:32.898868] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.087 [2024-10-07 05:40:32.898893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:21:29.087 [2024-10-07 05:40:32.898964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.087 BaseBdev2 00:21:29.087 05:40:32 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:29.345 05:40:33 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:29.604 [2024-10-07 05:40:33.342253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.604 [2024-10-07 05:40:33.342320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.604 [2024-10-07 05:40:33.342359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:29.604 [2024-10-07 05:40:33.342382] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.604 [2024-10-07 05:40:33.342793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.604 [2024-10-07 05:40:33.342861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.604 [2024-10-07 05:40:33.342980] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:29.604 [2024-10-07 05:40:33.343007] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.604 spare 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:21:29.604 [2024-10-07 05:40:33.443099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:29.604 [2024-10-07 05:40:33.443122] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:29.604 [2024-10-07 05:40:33.443235] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:21:29.604 [2024-10-07 05:40:33.443615] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:21:29.604 [2024-10-07 05:40:33.443649] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:29.604 [2024-10-07 05:40:33.443768] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.604 "name": "raid_bdev1", 00:21:29.604 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:29.604 "strip_size_kb": 0, 00:21:29.604 "state": "online", 00:21:29.604 "raid_level": "raid1", 00:21:29.604 "superblock": true, 00:21:29.604 "num_base_bdevs": 2, 00:21:29.604 "num_base_bdevs_discovered": 2, 00:21:29.604 "num_base_bdevs_operational": 2, 00:21:29.604 "base_bdevs_list": [ 00:21:29.604 { 00:21:29.604 "name": "spare", 00:21:29.604 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:29.604 "is_configured": true, 00:21:29.604 "data_offset": 2048, 00:21:29.604 "data_size": 63488 00:21:29.604 }, 00:21:29.604 { 00:21:29.604 "name": "BaseBdev2", 00:21:29.604 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:29.604 "is_configured": true, 00:21:29.604 "data_offset": 2048, 00:21:29.604 "data_size": 63488 00:21:29.604 } 00:21:29.604 ] 00:21:29.604 }' 00:21:29.604 05:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.604 05:40:33 -- common/autotest_common.sh@10 -- # set +x 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.171 05:40:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.430 "name": "raid_bdev1", 00:21:30.430 "uuid": "1868e39e-b3e6-49c3-864a-92d897c62457", 00:21:30.430 "strip_size_kb": 0, 00:21:30.430 "state": "online", 00:21:30.430 "raid_level": "raid1", 00:21:30.430 "superblock": true, 00:21:30.430 "num_base_bdevs": 2, 00:21:30.430 "num_base_bdevs_discovered": 2, 00:21:30.430 "num_base_bdevs_operational": 2, 00:21:30.430 "base_bdevs_list": [ 00:21:30.430 { 00:21:30.430 "name": "spare", 00:21:30.430 "uuid": "6e3b3455-5877-542f-aa0b-efe44242c27c", 00:21:30.430 "is_configured": true, 00:21:30.430 "data_offset": 2048, 00:21:30.430 "data_size": 63488 00:21:30.430 }, 00:21:30.430 { 00:21:30.430 "name": "BaseBdev2", 00:21:30.430 "uuid": "16408055-73ca-5dfc-85b9-65e1c286a983", 00:21:30.430 "is_configured": true, 00:21:30.430 "data_offset": 2048, 00:21:30.430 "data_size": 63488 00:21:30.430 } 00:21:30.430 ] 00:21:30.430 }' 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.430 
05:40:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.430 05:40:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:30.689 05:40:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.689 05:40:34 -- bdev/bdev_raid.sh@709 -- # killprocess 167765 00:21:30.689 05:40:34 -- common/autotest_common.sh@926 -- # '[' -z 167765 ']' 00:21:30.689 05:40:34 -- common/autotest_common.sh@930 -- # kill -0 167765 00:21:30.689 05:40:34 -- common/autotest_common.sh@931 -- # uname 00:21:30.689 05:40:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:30.689 05:40:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 167765 00:21:30.689 05:40:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:30.689 05:40:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:30.689 05:40:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 167765' 00:21:30.689 killing process with pid 167765 00:21:30.689 05:40:34 -- common/autotest_common.sh@945 -- # kill 167765 00:21:30.689 Received shutdown signal, test time was about 16.158738 seconds 00:21:30.689 00:21:30.689 Latency(us) 00:21:30.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.689 =================================================================================================================== 00:21:30.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.689 [2024-10-07 05:40:34.616029] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.689 [2024-10-07 05:40:34.616105] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.689 [2024-10-07 05:40:34.616161] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.689 [2024-10-07 05:40:34.616180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:30.689 05:40:34 -- common/autotest_common.sh@950 -- # wait 167765 00:21:30.948 [2024-10-07 05:40:34.764316] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.883 ************************************ 00:21:31.883 END TEST raid_rebuild_test_sb_io 00:21:31.883 ************************************ 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:31.883 00:21:31.883 real 0m21.447s 00:21:31.883 user 0m33.989s 00:21:31.883 sys 0m2.476s 00:21:31.883 05:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:31.883 05:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:31.883 05:40:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:31.883 05:40:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:31.883 05:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:31.883 ************************************ 00:21:31.883 START TEST raid_rebuild_test 00:21:31.883 ************************************ 00:21:31.883 05:40:35 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false 
false 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=168331 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 168331 /var/tmp/spdk-raid.sock 00:21:31.883 05:40:35 -- common/autotest_common.sh@819 -- # '[' -z 168331 ']' 00:21:31.883 05:40:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:31.884 05:40:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.884 05:40:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:31.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:31.884 05:40:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.884 05:40:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:31.884 05:40:35 -- common/autotest_common.sh@10 -- # set +x 00:21:31.884 [2024-10-07 05:40:35.854633] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:31.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.884 Zero copy mechanism will not be used. 
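raid_rebuild_test drives everything through a dedicated bdevperf instance that owns the /var/tmp/spdk-raid.sock RPC socket; the harness starts it first and only then creates the malloc, delay, passthru and raid bdevs over RPC. A rough sketch of that start-up, with the bdevperf flags copied from the command line recorded in this log and a simplified stand-in for waitforlisten that just retries an RPC until the app answers.

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # 60 s of 50/50 random read/write in 3 MiB units, queue depth 2, bdev_raid debug logs on
  $bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # crude waitforlisten: poll the RPC socket until the app is ready
  until $rpc -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done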
00:21:31.884 [2024-10-07 05:40:35.854794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168331 ] 00:21:32.143 [2024-10-07 05:40:36.007360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.402 [2024-10-07 05:40:36.163997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.402 [2024-10-07 05:40:36.328323] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.969 05:40:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:32.969 05:40:36 -- common/autotest_common.sh@852 -- # return 0 00:21:32.969 05:40:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.969 05:40:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:32.969 05:40:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:33.228 BaseBdev1 00:21:33.228 05:40:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.228 05:40:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:33.228 05:40:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:33.486 BaseBdev2 00:21:33.486 05:40:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.486 05:40:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:33.486 05:40:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:33.745 BaseBdev3 00:21:33.745 05:40:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.745 05:40:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:33.745 05:40:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:34.004 BaseBdev4 00:21:34.004 05:40:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:34.004 spare_malloc 00:21:34.004 05:40:37 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:34.263 spare_delay 00:21:34.263 05:40:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:34.520 [2024-10-07 05:40:38.303852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:34.520 [2024-10-07 05:40:38.303945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.520 [2024-10-07 05:40:38.303983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:34.520 [2024-10-07 05:40:38.304032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.520 [2024-10-07 05:40:38.306196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.520 [2024-10-07 05:40:38.306250] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:34.520 spare 00:21:34.520 05:40:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:34.778 [2024-10-07 05:40:38.547945] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.778 [2024-10-07 05:40:38.549725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.778 [2024-10-07 05:40:38.549784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.778 [2024-10-07 05:40:38.549826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:34.778 [2024-10-07 05:40:38.549901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:34.778 [2024-10-07 05:40:38.549914] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:34.778 [2024-10-07 05:40:38.550044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:34.778 [2024-10-07 05:40:38.550397] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:34.778 [2024-10-07 05:40:38.550422] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:34.778 [2024-10-07 05:40:38.550605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.778 05:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.778 "name": "raid_bdev1", 00:21:34.778 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:34.778 "strip_size_kb": 0, 00:21:34.778 "state": "online", 00:21:34.778 "raid_level": "raid1", 00:21:34.778 "superblock": false, 00:21:34.778 "num_base_bdevs": 4, 00:21:34.778 "num_base_bdevs_discovered": 4, 00:21:34.778 "num_base_bdevs_operational": 4, 00:21:34.778 "base_bdevs_list": [ 00:21:34.778 { 00:21:34.778 "name": "BaseBdev1", 00:21:34.778 "uuid": "f3836540-3559-4fe0-94a0-449575d4ab86", 00:21:34.778 "is_configured": true, 00:21:34.778 "data_offset": 0, 00:21:34.778 "data_size": 65536 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "name": "BaseBdev2", 00:21:34.778 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:34.778 "is_configured": true, 00:21:34.778 "data_offset": 0, 00:21:34.778 "data_size": 65536 00:21:34.778 }, 00:21:34.778 { 00:21:34.778 "name": "BaseBdev3", 00:21:34.778 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:34.778 "is_configured": true, 00:21:34.778 "data_offset": 0, 00:21:34.778 "data_size": 65536 00:21:34.778 }, 
00:21:34.778 { 00:21:34.778 "name": "BaseBdev4", 00:21:34.778 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:34.778 "is_configured": true, 00:21:34.779 "data_offset": 0, 00:21:34.779 "data_size": 65536 00:21:34.779 } 00:21:34.779 ] 00:21:34.779 }' 00:21:34.779 05:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.779 05:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:35.744 05:40:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.744 05:40:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:35.744 [2024-10-07 05:40:39.592282] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.744 05:40:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:35.744 05:40:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.744 05:40:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:36.003 05:40:39 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:36.003 05:40:39 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:36.003 05:40:39 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:36.003 05:40:39 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.003 05:40:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:36.003 [2024-10-07 05:40:39.964115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:36.261 /dev/nbd0 00:21:36.262 05:40:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.262 05:40:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.262 05:40:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:36.262 05:40:40 -- common/autotest_common.sh@857 -- # local i 00:21:36.262 05:40:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:36.262 05:40:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:36.262 05:40:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:36.262 05:40:40 -- common/autotest_common.sh@861 -- # break 00:21:36.262 05:40:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:36.262 05:40:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:36.262 05:40:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.262 1+0 records in 00:21:36.262 1+0 records out 00:21:36.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273428 s, 15.0 MB/s 00:21:36.262 05:40:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.262 05:40:40 -- common/autotest_common.sh@874 -- # size=4096 00:21:36.262 05:40:40 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.262 05:40:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:36.262 05:40:40 -- common/autotest_common.sh@877 -- # return 0 00:21:36.262 05:40:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.262 05:40:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.262 05:40:40 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:36.262 05:40:40 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:36.262 05:40:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:41.533 65536+0 records in 00:21:41.533 65536+0 records out 00:21:41.533 33554432 bytes (34 MB, 32 MiB) copied, 5.32256 s, 6.3 MB/s 00:21:41.533 05:40:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@51 -- # local i 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.533 05:40:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:41.791 [2024-10-07 05:40:45.587270] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.791 05:40:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:41.792 05:40:45 -- bdev/nbd_common.sh@41 -- # break 00:21:41.792 05:40:45 -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.792 05:40:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:42.050 [2024-10-07 05:40:45.826835] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.050 05:40:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.309 05:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.309 "name": "raid_bdev1", 00:21:42.309 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:42.309 "strip_size_kb": 0, 00:21:42.309 "state": "online", 00:21:42.309 
"raid_level": "raid1", 00:21:42.309 "superblock": false, 00:21:42.309 "num_base_bdevs": 4, 00:21:42.309 "num_base_bdevs_discovered": 3, 00:21:42.309 "num_base_bdevs_operational": 3, 00:21:42.309 "base_bdevs_list": [ 00:21:42.309 { 00:21:42.309 "name": null, 00:21:42.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.309 "is_configured": false, 00:21:42.309 "data_offset": 0, 00:21:42.309 "data_size": 65536 00:21:42.309 }, 00:21:42.309 { 00:21:42.309 "name": "BaseBdev2", 00:21:42.309 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:42.309 "is_configured": true, 00:21:42.309 "data_offset": 0, 00:21:42.309 "data_size": 65536 00:21:42.309 }, 00:21:42.309 { 00:21:42.309 "name": "BaseBdev3", 00:21:42.309 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:42.309 "is_configured": true, 00:21:42.309 "data_offset": 0, 00:21:42.309 "data_size": 65536 00:21:42.309 }, 00:21:42.309 { 00:21:42.309 "name": "BaseBdev4", 00:21:42.309 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:42.309 "is_configured": true, 00:21:42.309 "data_offset": 0, 00:21:42.309 "data_size": 65536 00:21:42.309 } 00:21:42.310 ] 00:21:42.310 }' 00:21:42.310 05:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.310 05:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:42.877 05:40:46 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.135 [2024-10-07 05:40:46.867127] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:43.135 [2024-10-07 05:40:46.867175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.135 [2024-10-07 05:40:46.878035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:21:43.135 [2024-10-07 05:40:46.880210] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:43.135 05:40:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.070 05:40:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:44.329 "name": "raid_bdev1", 00:21:44.329 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:44.329 "strip_size_kb": 0, 00:21:44.329 "state": "online", 00:21:44.329 "raid_level": "raid1", 00:21:44.329 "superblock": false, 00:21:44.329 "num_base_bdevs": 4, 00:21:44.329 "num_base_bdevs_discovered": 4, 00:21:44.329 "num_base_bdevs_operational": 4, 00:21:44.329 "process": { 00:21:44.329 "type": "rebuild", 00:21:44.329 "target": "spare", 00:21:44.329 "progress": { 00:21:44.329 "blocks": 24576, 00:21:44.329 "percent": 37 00:21:44.329 } 00:21:44.329 }, 00:21:44.329 "base_bdevs_list": [ 00:21:44.329 { 00:21:44.329 "name": "spare", 00:21:44.329 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:44.329 "is_configured": true, 00:21:44.329 "data_offset": 0, 00:21:44.329 "data_size": 65536 00:21:44.329 }, 
00:21:44.329 { 00:21:44.329 "name": "BaseBdev2", 00:21:44.329 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:44.329 "is_configured": true, 00:21:44.329 "data_offset": 0, 00:21:44.329 "data_size": 65536 00:21:44.329 }, 00:21:44.329 { 00:21:44.329 "name": "BaseBdev3", 00:21:44.329 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:44.329 "is_configured": true, 00:21:44.329 "data_offset": 0, 00:21:44.329 "data_size": 65536 00:21:44.329 }, 00:21:44.329 { 00:21:44.329 "name": "BaseBdev4", 00:21:44.329 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:44.329 "is_configured": true, 00:21:44.329 "data_offset": 0, 00:21:44.329 "data_size": 65536 00:21:44.329 } 00:21:44.329 ] 00:21:44.329 }' 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.329 05:40:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:44.586 [2024-10-07 05:40:48.462763] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.586 [2024-10-07 05:40:48.490743] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.586 [2024-10-07 05:40:48.490919] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.586 05:40:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:44.586 05:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:44.586 05:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:44.586 05:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.587 05:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.844 05:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.844 "name": "raid_bdev1", 00:21:44.844 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:44.844 "strip_size_kb": 0, 00:21:44.844 "state": "online", 00:21:44.844 "raid_level": "raid1", 00:21:44.844 "superblock": false, 00:21:44.844 "num_base_bdevs": 4, 00:21:44.844 "num_base_bdevs_discovered": 3, 00:21:44.844 "num_base_bdevs_operational": 3, 00:21:44.844 "base_bdevs_list": [ 00:21:44.844 { 00:21:44.844 "name": null, 00:21:44.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.844 "is_configured": false, 00:21:44.844 "data_offset": 0, 00:21:44.844 "data_size": 65536 00:21:44.844 }, 00:21:44.844 { 00:21:44.844 "name": "BaseBdev2", 00:21:44.844 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:44.844 "is_configured": true, 00:21:44.844 "data_offset": 0, 00:21:44.844 "data_size": 65536 00:21:44.844 }, 00:21:44.844 { 00:21:44.844 "name": "BaseBdev3", 
00:21:44.844 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:44.844 "is_configured": true, 00:21:44.844 "data_offset": 0, 00:21:44.844 "data_size": 65536 00:21:44.844 }, 00:21:44.844 { 00:21:44.844 "name": "BaseBdev4", 00:21:44.844 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:44.845 "is_configured": true, 00:21:44.845 "data_offset": 0, 00:21:44.845 "data_size": 65536 00:21:44.845 } 00:21:44.845 ] 00:21:44.845 }' 00:21:44.845 05:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.845 05:40:48 -- common/autotest_common.sh@10 -- # set +x 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.780 "name": "raid_bdev1", 00:21:45.780 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:45.780 "strip_size_kb": 0, 00:21:45.780 "state": "online", 00:21:45.780 "raid_level": "raid1", 00:21:45.780 "superblock": false, 00:21:45.780 "num_base_bdevs": 4, 00:21:45.780 "num_base_bdevs_discovered": 3, 00:21:45.780 "num_base_bdevs_operational": 3, 00:21:45.780 "base_bdevs_list": [ 00:21:45.780 { 00:21:45.780 "name": null, 00:21:45.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.780 "is_configured": false, 00:21:45.780 "data_offset": 0, 00:21:45.780 "data_size": 65536 00:21:45.780 }, 00:21:45.780 { 00:21:45.780 "name": "BaseBdev2", 00:21:45.780 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:45.780 "is_configured": true, 00:21:45.780 "data_offset": 0, 00:21:45.780 "data_size": 65536 00:21:45.780 }, 00:21:45.780 { 00:21:45.780 "name": "BaseBdev3", 00:21:45.780 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:45.780 "is_configured": true, 00:21:45.780 "data_offset": 0, 00:21:45.780 "data_size": 65536 00:21:45.780 }, 00:21:45.780 { 00:21:45.780 "name": "BaseBdev4", 00:21:45.780 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:45.780 "is_configured": true, 00:21:45.780 "data_offset": 0, 00:21:45.780 "data_size": 65536 00:21:45.780 } 00:21:45.780 ] 00:21:45.780 }' 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:45.780 05:40:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.039 05:40:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:46.039 05:40:49 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.039 [2024-10-07 05:40:50.010585] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:46.039 [2024-10-07 05:40:50.010630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.297 [2024-10-07 05:40:50.021889] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:21:46.297 [2024-10-07 05:40:50.024208] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:21:46.297 05:40:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.232 05:40:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:47.492 "name": "raid_bdev1", 00:21:47.492 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:47.492 "strip_size_kb": 0, 00:21:47.492 "state": "online", 00:21:47.492 "raid_level": "raid1", 00:21:47.492 "superblock": false, 00:21:47.492 "num_base_bdevs": 4, 00:21:47.492 "num_base_bdevs_discovered": 4, 00:21:47.492 "num_base_bdevs_operational": 4, 00:21:47.492 "process": { 00:21:47.492 "type": "rebuild", 00:21:47.492 "target": "spare", 00:21:47.492 "progress": { 00:21:47.492 "blocks": 24576, 00:21:47.492 "percent": 37 00:21:47.492 } 00:21:47.492 }, 00:21:47.492 "base_bdevs_list": [ 00:21:47.492 { 00:21:47.492 "name": "spare", 00:21:47.492 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:47.492 "is_configured": true, 00:21:47.492 "data_offset": 0, 00:21:47.492 "data_size": 65536 00:21:47.492 }, 00:21:47.492 { 00:21:47.492 "name": "BaseBdev2", 00:21:47.492 "uuid": "f0e6e1c8-1a68-476d-a84c-353c41ce9e69", 00:21:47.492 "is_configured": true, 00:21:47.492 "data_offset": 0, 00:21:47.492 "data_size": 65536 00:21:47.492 }, 00:21:47.492 { 00:21:47.492 "name": "BaseBdev3", 00:21:47.492 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:47.492 "is_configured": true, 00:21:47.492 "data_offset": 0, 00:21:47.492 "data_size": 65536 00:21:47.492 }, 00:21:47.492 { 00:21:47.492 "name": "BaseBdev4", 00:21:47.492 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:47.492 "is_configured": true, 00:21:47.492 "data_offset": 0, 00:21:47.492 "data_size": 65536 00:21:47.492 } 00:21:47.492 ] 00:21:47.492 }' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:47.492 05:40:51 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:47.751 [2024-10-07 05:40:51.605953] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:47.751 [2024-10-07 05:40:51.634426] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:47.751 05:40:51 -- 
bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.751 05:40:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.009 "name": "raid_bdev1", 00:21:48.009 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:48.009 "strip_size_kb": 0, 00:21:48.009 "state": "online", 00:21:48.009 "raid_level": "raid1", 00:21:48.009 "superblock": false, 00:21:48.009 "num_base_bdevs": 4, 00:21:48.009 "num_base_bdevs_discovered": 3, 00:21:48.009 "num_base_bdevs_operational": 3, 00:21:48.009 "process": { 00:21:48.009 "type": "rebuild", 00:21:48.009 "target": "spare", 00:21:48.009 "progress": { 00:21:48.009 "blocks": 36864, 00:21:48.009 "percent": 56 00:21:48.009 } 00:21:48.009 }, 00:21:48.009 "base_bdevs_list": [ 00:21:48.009 { 00:21:48.009 "name": "spare", 00:21:48.009 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:48.009 "is_configured": true, 00:21:48.009 "data_offset": 0, 00:21:48.009 "data_size": 65536 00:21:48.009 }, 00:21:48.009 { 00:21:48.009 "name": null, 00:21:48.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.009 "is_configured": false, 00:21:48.009 "data_offset": 0, 00:21:48.009 "data_size": 65536 00:21:48.009 }, 00:21:48.009 { 00:21:48.009 "name": "BaseBdev3", 00:21:48.009 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:48.009 "is_configured": true, 00:21:48.009 "data_offset": 0, 00:21:48.009 "data_size": 65536 00:21:48.009 }, 00:21:48.009 { 00:21:48.009 "name": "BaseBdev4", 00:21:48.009 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:48.009 "is_configured": true, 00:21:48.009 "data_offset": 0, 00:21:48.009 "data_size": 65536 00:21:48.009 } 00:21:48.009 ] 00:21:48.009 }' 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@657 -- # local timeout=491 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.009 05:40:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.268 05:40:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.268 "name": "raid_bdev1", 00:21:48.268 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:48.268 "strip_size_kb": 0, 00:21:48.268 
"state": "online", 00:21:48.268 "raid_level": "raid1", 00:21:48.268 "superblock": false, 00:21:48.268 "num_base_bdevs": 4, 00:21:48.268 "num_base_bdevs_discovered": 3, 00:21:48.268 "num_base_bdevs_operational": 3, 00:21:48.268 "process": { 00:21:48.268 "type": "rebuild", 00:21:48.268 "target": "spare", 00:21:48.268 "progress": { 00:21:48.268 "blocks": 43008, 00:21:48.268 "percent": 65 00:21:48.268 } 00:21:48.268 }, 00:21:48.268 "base_bdevs_list": [ 00:21:48.268 { 00:21:48.268 "name": "spare", 00:21:48.268 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:48.268 "is_configured": true, 00:21:48.268 "data_offset": 0, 00:21:48.268 "data_size": 65536 00:21:48.268 }, 00:21:48.268 { 00:21:48.268 "name": null, 00:21:48.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.268 "is_configured": false, 00:21:48.268 "data_offset": 0, 00:21:48.268 "data_size": 65536 00:21:48.268 }, 00:21:48.268 { 00:21:48.268 "name": "BaseBdev3", 00:21:48.268 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:48.268 "is_configured": true, 00:21:48.268 "data_offset": 0, 00:21:48.268 "data_size": 65536 00:21:48.268 }, 00:21:48.268 { 00:21:48.268 "name": "BaseBdev4", 00:21:48.268 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:48.268 "is_configured": true, 00:21:48.268 "data_offset": 0, 00:21:48.268 "data_size": 65536 00:21:48.268 } 00:21:48.268 ] 00:21:48.268 }' 00:21:48.268 05:40:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.268 05:40:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.268 05:40:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.526 05:40:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.526 05:40:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:49.462 [2024-10-07 05:40:53.244800] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:49.462 [2024-10-07 05:40:53.244875] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:49.462 [2024-10-07 05:40:53.244952] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.462 05:40:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.721 "name": "raid_bdev1", 00:21:49.721 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:49.721 "strip_size_kb": 0, 00:21:49.721 "state": "online", 00:21:49.721 "raid_level": "raid1", 00:21:49.721 "superblock": false, 00:21:49.721 "num_base_bdevs": 4, 00:21:49.721 "num_base_bdevs_discovered": 3, 00:21:49.721 "num_base_bdevs_operational": 3, 00:21:49.721 "base_bdevs_list": [ 00:21:49.721 { 00:21:49.721 "name": "spare", 00:21:49.721 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:49.721 "is_configured": true, 00:21:49.721 "data_offset": 0, 00:21:49.721 "data_size": 65536 00:21:49.721 
}, 00:21:49.721 { 00:21:49.721 "name": null, 00:21:49.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.721 "is_configured": false, 00:21:49.721 "data_offset": 0, 00:21:49.721 "data_size": 65536 00:21:49.721 }, 00:21:49.721 { 00:21:49.721 "name": "BaseBdev3", 00:21:49.721 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:49.721 "is_configured": true, 00:21:49.721 "data_offset": 0, 00:21:49.721 "data_size": 65536 00:21:49.721 }, 00:21:49.721 { 00:21:49.721 "name": "BaseBdev4", 00:21:49.721 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:49.721 "is_configured": true, 00:21:49.721 "data_offset": 0, 00:21:49.721 "data_size": 65536 00:21:49.721 } 00:21:49.721 ] 00:21:49.721 }' 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@660 -- # break 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.721 05:40:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.979 "name": "raid_bdev1", 00:21:49.979 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:49.979 "strip_size_kb": 0, 00:21:49.979 "state": "online", 00:21:49.979 "raid_level": "raid1", 00:21:49.979 "superblock": false, 00:21:49.979 "num_base_bdevs": 4, 00:21:49.979 "num_base_bdevs_discovered": 3, 00:21:49.979 "num_base_bdevs_operational": 3, 00:21:49.979 "base_bdevs_list": [ 00:21:49.979 { 00:21:49.979 "name": "spare", 00:21:49.979 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:49.979 "is_configured": true, 00:21:49.979 "data_offset": 0, 00:21:49.979 "data_size": 65536 00:21:49.979 }, 00:21:49.979 { 00:21:49.979 "name": null, 00:21:49.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.979 "is_configured": false, 00:21:49.979 "data_offset": 0, 00:21:49.979 "data_size": 65536 00:21:49.979 }, 00:21:49.979 { 00:21:49.979 "name": "BaseBdev3", 00:21:49.979 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:49.979 "is_configured": true, 00:21:49.979 "data_offset": 0, 00:21:49.979 "data_size": 65536 00:21:49.979 }, 00:21:49.979 { 00:21:49.979 "name": "BaseBdev4", 00:21:49.979 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:49.979 "is_configured": true, 00:21:49.979 "data_offset": 0, 00:21:49.979 "data_size": 65536 00:21:49.979 } 00:21:49.979 ] 00:21:49.979 }' 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:49.979 
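The JSON dumps above are taken after the "Finished rebuild on raid bdev raid_bdev1" notice: the process object is gone (both jq filters fall back to "none") and the array stays online with three of the four base bdevs operational, the removed slot showing the all-zero uuid placeholder. A hedged sketch of waiting for that point from outside the test, by polling the same RPC until no rebuild process is reported and then re-reading the state.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # block until the raid bdev stops reporting a rebuild process
  while true; do
      ptype=$($rpc -s "$sock" bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
      [ "$ptype" = "none" ] && break
      sleep 1
  done

  # the array should still be online, now with 3 operational base bdevs
  $rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
  $rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'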
05:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.979 05:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.237 05:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.237 "name": "raid_bdev1", 00:21:50.237 "uuid": "03eef475-8b29-41d7-ac0d-c0230ae50de8", 00:21:50.237 "strip_size_kb": 0, 00:21:50.237 "state": "online", 00:21:50.237 "raid_level": "raid1", 00:21:50.237 "superblock": false, 00:21:50.237 "num_base_bdevs": 4, 00:21:50.237 "num_base_bdevs_discovered": 3, 00:21:50.237 "num_base_bdevs_operational": 3, 00:21:50.237 "base_bdevs_list": [ 00:21:50.237 { 00:21:50.237 "name": "spare", 00:21:50.237 "uuid": "4f7430f4-0884-54d7-ae81-0019e0c3e84d", 00:21:50.237 "is_configured": true, 00:21:50.237 "data_offset": 0, 00:21:50.237 "data_size": 65536 00:21:50.237 }, 00:21:50.237 { 00:21:50.237 "name": null, 00:21:50.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.237 "is_configured": false, 00:21:50.237 "data_offset": 0, 00:21:50.237 "data_size": 65536 00:21:50.237 }, 00:21:50.237 { 00:21:50.237 "name": "BaseBdev3", 00:21:50.237 "uuid": "b841b281-0a4d-44ce-8b17-1dc7222240f3", 00:21:50.237 "is_configured": true, 00:21:50.237 "data_offset": 0, 00:21:50.237 "data_size": 65536 00:21:50.237 }, 00:21:50.237 { 00:21:50.237 "name": "BaseBdev4", 00:21:50.237 "uuid": "f61cf5f3-5d88-4f3a-9665-ad4fa9a60968", 00:21:50.237 "is_configured": true, 00:21:50.237 "data_offset": 0, 00:21:50.237 "data_size": 65536 00:21:50.237 } 00:21:50.237 ] 00:21:50.237 }' 00:21:50.237 05:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.237 05:40:54 -- common/autotest_common.sh@10 -- # set +x 00:21:50.802 05:40:54 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:51.060 [2024-10-07 05:40:54.992471] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.060 [2024-10-07 05:40:54.992511] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.060 [2024-10-07 05:40:54.992618] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.060 [2024-10-07 05:40:54.992712] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.060 [2024-10-07 05:40:54.992738] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:51.060 05:40:55 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.060 05:40:55 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:51.318 05:40:55 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:51.318 05:40:55 
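At this point the trace tears the array down: bdev_raid_delete raid_bdev1 moves the bdev from online to offline and frees it, and bdev_raid_get_bdevs all piped through jq length returning 0 confirms no raid bdevs remain registered. The trace then performs the actual data check, exporting the removed base bdev and the rebuilt spare over /dev/nbd0 and /dev/nbd1 and comparing them byte for byte. A hedged sketch of that teardown-plus-compare, using only RPCs recorded in this run; cmp starts at offset 0 because this test runs without an on-disk superblock, so data begins at block 0.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # delete the array and make sure nothing is left behind
  $rpc -s "$sock" bdev_raid_delete raid_bdev1
  count=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq length)
  [ "$count" -eq 0 ] || echo "raid bdev still registered" >&2

  # export the reference bdev and the rebuilt spare, then compare their contents
  $rpc -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
  $rpc -s "$sock" nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0
  $rpc -s "$sock" nbd_stop_disk /dev/nbd1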
-- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:51.318 05:40:55 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@12 -- # local i 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.318 05:40:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:51.575 /dev/nbd0 00:21:51.575 05:40:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:51.575 05:40:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:51.575 05:40:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:51.575 05:40:55 -- common/autotest_common.sh@857 -- # local i 00:21:51.576 05:40:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:51.576 05:40:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:51.576 05:40:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:51.576 05:40:55 -- common/autotest_common.sh@861 -- # break 00:21:51.576 05:40:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:51.576 05:40:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:51.576 05:40:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.576 1+0 records in 00:21:51.576 1+0 records out 00:21:51.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542223 s, 7.6 MB/s 00:21:51.576 05:40:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.576 05:40:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:51.576 05:40:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.576 05:40:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:51.576 05:40:55 -- common/autotest_common.sh@877 -- # return 0 00:21:51.576 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.576 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.576 05:40:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:51.834 /dev/nbd1 00:21:51.835 05:40:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:51.835 05:40:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:51.835 05:40:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:51.835 05:40:55 -- common/autotest_common.sh@857 -- # local i 00:21:51.835 05:40:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:51.835 05:40:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:51.835 05:40:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:51.835 05:40:55 -- common/autotest_common.sh@861 -- # break 00:21:51.835 05:40:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:51.835 05:40:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:51.835 05:40:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.835 1+0 records in 00:21:51.835 1+0 records out 00:21:51.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051961 s, 7.9 MB/s 00:21:51.835 05:40:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.835 05:40:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:51.835 05:40:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:51.835 05:40:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:51.835 05:40:55 -- common/autotest_common.sh@877 -- # return 0 00:21:51.835 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.835 05:40:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:51.835 05:40:55 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:52.094 05:40:55 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@51 -- # local i 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.094 05:40:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@41 -- # break 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.392 05:40:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@41 -- # break 00:21:52.651 05:40:56 -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.651 05:40:56 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:52.651 05:40:56 -- bdev/bdev_raid.sh@709 -- # killprocess 168331 00:21:52.651 05:40:56 -- common/autotest_common.sh@926 -- # '[' -z 168331 ']' 00:21:52.651 05:40:56 -- common/autotest_common.sh@930 -- # kill -0 168331 00:21:52.651 05:40:56 -- common/autotest_common.sh@931 -- # uname 00:21:52.651 05:40:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:52.651 05:40:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 168331 00:21:52.651 05:40:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:52.651 05:40:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:52.651 
05:40:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 168331' 00:21:52.651 killing process with pid 168331 00:21:52.651 05:40:56 -- common/autotest_common.sh@945 -- # kill 168331 00:21:52.651 Received shutdown signal, test time was about 60.000000 seconds 00:21:52.651 00:21:52.651 Latency(us) 00:21:52.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.651 =================================================================================================================== 00:21:52.651 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:52.651 05:40:56 -- common/autotest_common.sh@950 -- # wait 168331 00:21:52.651 [2024-10-07 05:40:56.490447] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.909 [2024-10-07 05:40:56.829060] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:54.287 00:21:54.287 real 0m22.067s 00:21:54.287 user 0m30.413s 00:21:54.287 sys 0m3.674s 00:21:54.287 05:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.287 05:40:57 -- common/autotest_common.sh@10 -- # set +x 00:21:54.287 ************************************ 00:21:54.287 END TEST raid_rebuild_test 00:21:54.287 ************************************ 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:54.287 05:40:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:54.287 05:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:54.287 05:40:57 -- common/autotest_common.sh@10 -- # set +x 00:21:54.287 ************************************ 00:21:54.287 START TEST raid_rebuild_test_sb 00:21:54.287 ************************************ 00:21:54.287 05:40:57 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:54.287 05:40:57 -- 
bdev/bdev_raid.sh@524 -- # local create_arg 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@544 -- # raid_pid=168882 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@545 -- # waitforlisten 168882 /var/tmp/spdk-raid.sock 00:21:54.287 05:40:57 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:54.287 05:40:57 -- common/autotest_common.sh@819 -- # '[' -z 168882 ']' 00:21:54.287 05:40:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:54.287 05:40:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:54.287 05:40:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:54.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:54.287 05:40:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:54.287 05:40:57 -- common/autotest_common.sh@10 -- # set +x 00:21:54.287 [2024-10-07 05:40:57.995493] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:21:54.287 [2024-10-07 05:40:57.995815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168882 ] 00:21:54.287 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:54.287 Zero copy mechanism will not be used. 
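(Annotation, not part of the captured log.) The raid_rebuild_test_sb setup traced above boils down to the RPC-driven pattern that recurs throughout this run: start bdevperf against a private RPC socket, wait for it to listen, then configure everything through rpc.py. A minimal sketch of that pattern, assuming the SPDK autotest environment is sourced (waitforlisten is the helper from common/autotest_common.sh seen in the trace) and reusing only commands and flags that appear verbatim in this log:

    rpc_sock=/var/tmp/spdk-raid.sock
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_sock"

    # Launch bdevperf against the private RPC socket and wait until it is listening.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r $rpc_sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid $rpc_sock   # helper from common/autotest_common.sh

    # Build the array under test: malloc -> passthru base bdevs, then raid1 with a superblock (-s).
    for i in 1 2 3 4; do
        $rpc_py bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $rpc_py bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
    done
    $rpc_py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

    # Inspect array state the same way bdev_raid.sh@127 does.
    $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The spare device assembled later in the trace follows the same shape, with bdev_delay_create inserted between the malloc and passthru layers so the rebuild target is artificially slow.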
00:21:54.287 [2024-10-07 05:40:58.146790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.546 [2024-10-07 05:40:58.337428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.805 [2024-10-07 05:40:58.527610] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.064 05:40:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:55.064 05:40:58 -- common/autotest_common.sh@852 -- # return 0 00:21:55.064 05:40:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:55.064 05:40:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:55.064 05:40:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:55.326 BaseBdev1_malloc 00:21:55.326 05:40:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:55.585 [2024-10-07 05:40:59.407311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:55.585 [2024-10-07 05:40:59.407550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.585 [2024-10-07 05:40:59.407631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:55.585 [2024-10-07 05:40:59.407802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.585 [2024-10-07 05:40:59.410195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.585 [2024-10-07 05:40:59.410364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:55.585 BaseBdev1 00:21:55.585 05:40:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:55.585 05:40:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:55.585 05:40:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:55.844 BaseBdev2_malloc 00:21:55.844 05:40:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:56.103 [2024-10-07 05:40:59.948610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:56.103 [2024-10-07 05:40:59.948815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.103 [2024-10-07 05:40:59.948900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:56.103 [2024-10-07 05:40:59.949063] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.103 [2024-10-07 05:40:59.951505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.103 [2024-10-07 05:40:59.951696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:56.103 BaseBdev2 00:21:56.103 05:40:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:56.103 05:40:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:56.103 05:40:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:56.362 BaseBdev3_malloc 00:21:56.363 05:41:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:21:56.620 [2024-10-07 05:41:00.435518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:56.620 [2024-10-07 05:41:00.435728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.620 [2024-10-07 05:41:00.435811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:56.620 [2024-10-07 05:41:00.436010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.620 [2024-10-07 05:41:00.438447] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.620 [2024-10-07 05:41:00.438645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:56.620 BaseBdev3 00:21:56.620 05:41:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:56.620 05:41:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:56.620 05:41:00 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:56.878 BaseBdev4_malloc 00:21:56.878 05:41:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:57.136 [2024-10-07 05:41:00.908666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:57.136 [2024-10-07 05:41:00.908880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.136 [2024-10-07 05:41:00.908956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:57.136 [2024-10-07 05:41:00.909219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.136 [2024-10-07 05:41:00.911707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.136 [2024-10-07 05:41:00.911880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:57.136 BaseBdev4 00:21:57.136 05:41:00 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:57.395 spare_malloc 00:21:57.395 05:41:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:57.395 spare_delay 00:21:57.395 05:41:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:57.653 [2024-10-07 05:41:01.571332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:57.653 [2024-10-07 05:41:01.571554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.653 [2024-10-07 05:41:01.571627] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:57.653 [2024-10-07 05:41:01.571780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.653 [2024-10-07 05:41:01.574181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.653 [2024-10-07 05:41:01.574382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:57.653 spare 00:21:57.653 05:41:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:57.912 [2024-10-07 05:41:01.751446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.912 [2024-10-07 05:41:01.753605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.912 [2024-10-07 05:41:01.753820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.912 [2024-10-07 05:41:01.753923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:57.912 [2024-10-07 05:41:01.754231] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:57.912 [2024-10-07 05:41:01.754282] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:57.912 [2024-10-07 05:41:01.754527] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:57.912 [2024-10-07 05:41:01.755149] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:57.912 [2024-10-07 05:41:01.755329] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:57.912 [2024-10-07 05:41:01.755600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.912 05:41:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.171 05:41:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.171 "name": "raid_bdev1", 00:21:58.171 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:21:58.171 "strip_size_kb": 0, 00:21:58.171 "state": "online", 00:21:58.171 "raid_level": "raid1", 00:21:58.171 "superblock": true, 00:21:58.171 "num_base_bdevs": 4, 00:21:58.171 "num_base_bdevs_discovered": 4, 00:21:58.171 "num_base_bdevs_operational": 4, 00:21:58.171 "base_bdevs_list": [ 00:21:58.171 { 00:21:58.171 "name": "BaseBdev1", 00:21:58.171 "uuid": "d3cabaa9-84d2-55bf-9550-eb6844da32d2", 00:21:58.171 "is_configured": true, 00:21:58.171 "data_offset": 2048, 00:21:58.171 "data_size": 63488 00:21:58.171 }, 00:21:58.171 { 00:21:58.171 "name": "BaseBdev2", 00:21:58.171 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:21:58.171 "is_configured": true, 00:21:58.171 "data_offset": 2048, 00:21:58.171 "data_size": 63488 00:21:58.171 }, 00:21:58.171 { 00:21:58.171 "name": "BaseBdev3", 00:21:58.171 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:21:58.171 "is_configured": true, 00:21:58.171 "data_offset": 2048, 00:21:58.171 "data_size": 63488 00:21:58.171 }, 00:21:58.171 
{ 00:21:58.171 "name": "BaseBdev4", 00:21:58.171 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:21:58.171 "is_configured": true, 00:21:58.171 "data_offset": 2048, 00:21:58.171 "data_size": 63488 00:21:58.171 } 00:21:58.171 ] 00:21:58.171 }' 00:21:58.171 05:41:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.171 05:41:01 -- common/autotest_common.sh@10 -- # set +x 00:21:58.738 05:41:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:58.738 05:41:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:59.002 [2024-10-07 05:41:02.755952] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.002 05:41:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:59.002 05:41:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:59.002 05:41:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.260 05:41:03 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:59.260 05:41:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:59.260 05:41:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:59.260 05:41:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@12 -- # local i 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.260 05:41:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:59.520 [2024-10-07 05:41:03.255869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:59.520 /dev/nbd0 00:21:59.520 05:41:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:59.520 05:41:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:59.520 05:41:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:59.520 05:41:03 -- common/autotest_common.sh@857 -- # local i 00:21:59.520 05:41:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:59.520 05:41:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:59.520 05:41:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:59.520 05:41:03 -- common/autotest_common.sh@861 -- # break 00:21:59.520 05:41:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:59.520 05:41:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:59.520 05:41:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.520 1+0 records in 00:21:59.520 1+0 records out 00:21:59.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436414 s, 9.4 MB/s 00:21:59.520 05:41:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.520 05:41:03 -- common/autotest_common.sh@874 -- # size=4096 00:21:59.520 05:41:03 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.520 05:41:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:59.520 05:41:03 -- common/autotest_common.sh@877 -- # return 0 00:21:59.520 05:41:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:59.520 05:41:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:59.520 05:41:03 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:59.520 05:41:03 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:59.520 05:41:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:06.101 63488+0 records in 00:22:06.101 63488+0 records out 00:22:06.101 32505856 bytes (33 MB, 31 MiB) copied, 6.14212 s, 5.3 MB/s 00:22:06.101 05:41:09 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@51 -- # local i 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:06.101 [2024-10-07 05:41:09.711052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@41 -- # break 00:22:06.101 05:41:09 -- bdev/nbd_common.sh@45 -- # return 0 00:22:06.101 05:41:09 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:06.101 [2024-10-07 05:41:09.878772] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.102 05:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.102 05:41:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.102 "name": "raid_bdev1", 00:22:06.102 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:06.102 "strip_size_kb": 0, 00:22:06.102 "state": "online", 00:22:06.102 
"raid_level": "raid1", 00:22:06.102 "superblock": true, 00:22:06.102 "num_base_bdevs": 4, 00:22:06.102 "num_base_bdevs_discovered": 3, 00:22:06.102 "num_base_bdevs_operational": 3, 00:22:06.102 "base_bdevs_list": [ 00:22:06.102 { 00:22:06.102 "name": null, 00:22:06.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.102 "is_configured": false, 00:22:06.102 "data_offset": 2048, 00:22:06.102 "data_size": 63488 00:22:06.102 }, 00:22:06.102 { 00:22:06.102 "name": "BaseBdev2", 00:22:06.102 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:22:06.102 "is_configured": true, 00:22:06.102 "data_offset": 2048, 00:22:06.102 "data_size": 63488 00:22:06.102 }, 00:22:06.102 { 00:22:06.102 "name": "BaseBdev3", 00:22:06.102 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:06.102 "is_configured": true, 00:22:06.102 "data_offset": 2048, 00:22:06.102 "data_size": 63488 00:22:06.102 }, 00:22:06.102 { 00:22:06.102 "name": "BaseBdev4", 00:22:06.102 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:06.102 "is_configured": true, 00:22:06.102 "data_offset": 2048, 00:22:06.102 "data_size": 63488 00:22:06.102 } 00:22:06.102 ] 00:22:06.102 }' 00:22:06.102 05:41:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.102 05:41:10 -- common/autotest_common.sh@10 -- # set +x 00:22:06.670 05:41:10 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.929 [2024-10-07 05:41:10.790976] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:06.929 [2024-10-07 05:41:10.791187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.929 [2024-10-07 05:41:10.802078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:22:06.929 [2024-10-07 05:41:10.804368] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.929 05:41:10 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.866 05:41:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.125 05:41:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.125 "name": "raid_bdev1", 00:22:08.125 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:08.125 "strip_size_kb": 0, 00:22:08.125 "state": "online", 00:22:08.125 "raid_level": "raid1", 00:22:08.125 "superblock": true, 00:22:08.125 "num_base_bdevs": 4, 00:22:08.125 "num_base_bdevs_discovered": 4, 00:22:08.125 "num_base_bdevs_operational": 4, 00:22:08.125 "process": { 00:22:08.125 "type": "rebuild", 00:22:08.125 "target": "spare", 00:22:08.125 "progress": { 00:22:08.125 "blocks": 24576, 00:22:08.125 "percent": 38 00:22:08.125 } 00:22:08.125 }, 00:22:08.125 "base_bdevs_list": [ 00:22:08.125 { 00:22:08.125 "name": "spare", 00:22:08.126 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:08.126 "is_configured": true, 00:22:08.126 "data_offset": 2048, 00:22:08.126 "data_size": 63488 00:22:08.126 
}, 00:22:08.126 { 00:22:08.126 "name": "BaseBdev2", 00:22:08.126 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:22:08.126 "is_configured": true, 00:22:08.126 "data_offset": 2048, 00:22:08.126 "data_size": 63488 00:22:08.126 }, 00:22:08.126 { 00:22:08.126 "name": "BaseBdev3", 00:22:08.126 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:08.126 "is_configured": true, 00:22:08.126 "data_offset": 2048, 00:22:08.126 "data_size": 63488 00:22:08.126 }, 00:22:08.126 { 00:22:08.126 "name": "BaseBdev4", 00:22:08.126 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:08.126 "is_configured": true, 00:22:08.126 "data_offset": 2048, 00:22:08.126 "data_size": 63488 00:22:08.126 } 00:22:08.126 ] 00:22:08.126 }' 00:22:08.126 05:41:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:08.385 05:41:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.385 05:41:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.385 05:41:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.385 05:41:12 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:08.644 [2024-10-07 05:41:12.390305] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.644 [2024-10-07 05:41:12.414960] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.644 [2024-10-07 05:41:12.415162] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.644 05:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.903 05:41:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.903 "name": "raid_bdev1", 00:22:08.903 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:08.903 "strip_size_kb": 0, 00:22:08.903 "state": "online", 00:22:08.903 "raid_level": "raid1", 00:22:08.903 "superblock": true, 00:22:08.903 "num_base_bdevs": 4, 00:22:08.903 "num_base_bdevs_discovered": 3, 00:22:08.903 "num_base_bdevs_operational": 3, 00:22:08.903 "base_bdevs_list": [ 00:22:08.903 { 00:22:08.903 "name": null, 00:22:08.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.903 "is_configured": false, 00:22:08.903 "data_offset": 2048, 00:22:08.903 "data_size": 63488 00:22:08.903 }, 00:22:08.903 { 00:22:08.903 "name": "BaseBdev2", 00:22:08.903 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:22:08.903 "is_configured": true, 00:22:08.903 "data_offset": 2048, 00:22:08.903 "data_size": 63488 00:22:08.903 }, 00:22:08.903 { 00:22:08.903 
"name": "BaseBdev3", 00:22:08.903 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:08.903 "is_configured": true, 00:22:08.903 "data_offset": 2048, 00:22:08.903 "data_size": 63488 00:22:08.903 }, 00:22:08.903 { 00:22:08.903 "name": "BaseBdev4", 00:22:08.903 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:08.903 "is_configured": true, 00:22:08.903 "data_offset": 2048, 00:22:08.903 "data_size": 63488 00:22:08.903 } 00:22:08.903 ] 00:22:08.903 }' 00:22:08.903 05:41:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.903 05:41:12 -- common/autotest_common.sh@10 -- # set +x 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.472 05:41:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.731 "name": "raid_bdev1", 00:22:09.731 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:09.731 "strip_size_kb": 0, 00:22:09.731 "state": "online", 00:22:09.731 "raid_level": "raid1", 00:22:09.731 "superblock": true, 00:22:09.731 "num_base_bdevs": 4, 00:22:09.731 "num_base_bdevs_discovered": 3, 00:22:09.731 "num_base_bdevs_operational": 3, 00:22:09.731 "base_bdevs_list": [ 00:22:09.731 { 00:22:09.731 "name": null, 00:22:09.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.731 "is_configured": false, 00:22:09.731 "data_offset": 2048, 00:22:09.731 "data_size": 63488 00:22:09.731 }, 00:22:09.731 { 00:22:09.731 "name": "BaseBdev2", 00:22:09.731 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:22:09.731 "is_configured": true, 00:22:09.731 "data_offset": 2048, 00:22:09.731 "data_size": 63488 00:22:09.731 }, 00:22:09.731 { 00:22:09.731 "name": "BaseBdev3", 00:22:09.731 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:09.731 "is_configured": true, 00:22:09.731 "data_offset": 2048, 00:22:09.731 "data_size": 63488 00:22:09.731 }, 00:22:09.731 { 00:22:09.731 "name": "BaseBdev4", 00:22:09.731 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:09.731 "is_configured": true, 00:22:09.731 "data_offset": 2048, 00:22:09.731 "data_size": 63488 00:22:09.731 } 00:22:09.731 ] 00:22:09.731 }' 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:09.731 05:41:13 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.990 [2024-10-07 05:41:13.790303] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:09.990 [2024-10-07 05:41:13.790465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.990 [2024-10-07 05:41:13.800293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:22:09.990 [2024-10-07 05:41:13.802439] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.990 05:41:13 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.937 05:41:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.217 05:41:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.217 "name": "raid_bdev1", 00:22:11.217 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:11.217 "strip_size_kb": 0, 00:22:11.217 "state": "online", 00:22:11.217 "raid_level": "raid1", 00:22:11.217 "superblock": true, 00:22:11.217 "num_base_bdevs": 4, 00:22:11.217 "num_base_bdevs_discovered": 4, 00:22:11.217 "num_base_bdevs_operational": 4, 00:22:11.217 "process": { 00:22:11.217 "type": "rebuild", 00:22:11.217 "target": "spare", 00:22:11.217 "progress": { 00:22:11.217 "blocks": 24576, 00:22:11.217 "percent": 38 00:22:11.217 } 00:22:11.217 }, 00:22:11.217 "base_bdevs_list": [ 00:22:11.217 { 00:22:11.217 "name": "spare", 00:22:11.217 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:11.217 "is_configured": true, 00:22:11.217 "data_offset": 2048, 00:22:11.217 "data_size": 63488 00:22:11.217 }, 00:22:11.217 { 00:22:11.217 "name": "BaseBdev2", 00:22:11.217 "uuid": "a0f4f807-0f39-57f1-8dd7-e802f9c76a96", 00:22:11.217 "is_configured": true, 00:22:11.217 "data_offset": 2048, 00:22:11.217 "data_size": 63488 00:22:11.217 }, 00:22:11.217 { 00:22:11.217 "name": "BaseBdev3", 00:22:11.218 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:11.218 "is_configured": true, 00:22:11.218 "data_offset": 2048, 00:22:11.218 "data_size": 63488 00:22:11.218 }, 00:22:11.218 { 00:22:11.218 "name": "BaseBdev4", 00:22:11.218 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:11.218 "is_configured": true, 00:22:11.218 "data_offset": 2048, 00:22:11.218 "data_size": 63488 00:22:11.218 } 00:22:11.218 ] 00:22:11.218 }' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:11.218 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:11.218 05:41:15 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:11.492 [2024-10-07 05:41:15.384936] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.492 [2024-10-07 05:41:15.412682] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.751 05:41:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.010 "name": "raid_bdev1", 00:22:12.010 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:12.010 "strip_size_kb": 0, 00:22:12.010 "state": "online", 00:22:12.010 "raid_level": "raid1", 00:22:12.010 "superblock": true, 00:22:12.010 "num_base_bdevs": 4, 00:22:12.010 "num_base_bdevs_discovered": 3, 00:22:12.010 "num_base_bdevs_operational": 3, 00:22:12.010 "process": { 00:22:12.010 "type": "rebuild", 00:22:12.010 "target": "spare", 00:22:12.010 "progress": { 00:22:12.010 "blocks": 38912, 00:22:12.010 "percent": 61 00:22:12.010 } 00:22:12.010 }, 00:22:12.010 "base_bdevs_list": [ 00:22:12.010 { 00:22:12.010 "name": "spare", 00:22:12.010 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:12.010 "is_configured": true, 00:22:12.010 "data_offset": 2048, 00:22:12.010 "data_size": 63488 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "name": null, 00:22:12.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.010 "is_configured": false, 00:22:12.010 "data_offset": 2048, 00:22:12.010 "data_size": 63488 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "name": "BaseBdev3", 00:22:12.010 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:12.010 "is_configured": true, 00:22:12.010 "data_offset": 2048, 00:22:12.010 "data_size": 63488 00:22:12.010 }, 00:22:12.010 { 00:22:12.010 "name": "BaseBdev4", 00:22:12.010 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:12.010 "is_configured": true, 00:22:12.010 "data_offset": 2048, 00:22:12.010 "data_size": 63488 00:22:12.010 } 00:22:12.010 ] 00:22:12.010 }' 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@657 -- # local timeout=515 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.010 05:41:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.010 05:41:15 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.269 "name": "raid_bdev1", 00:22:12.269 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:12.269 "strip_size_kb": 0, 00:22:12.269 "state": "online", 00:22:12.269 "raid_level": "raid1", 00:22:12.269 "superblock": true, 00:22:12.269 "num_base_bdevs": 4, 00:22:12.269 "num_base_bdevs_discovered": 3, 00:22:12.269 "num_base_bdevs_operational": 3, 00:22:12.269 "process": { 00:22:12.269 "type": "rebuild", 00:22:12.269 "target": "spare", 00:22:12.269 "progress": { 00:22:12.269 "blocks": 45056, 00:22:12.269 "percent": 70 00:22:12.269 } 00:22:12.269 }, 00:22:12.269 "base_bdevs_list": [ 00:22:12.269 { 00:22:12.269 "name": "spare", 00:22:12.269 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:12.269 "is_configured": true, 00:22:12.269 "data_offset": 2048, 00:22:12.269 "data_size": 63488 00:22:12.269 }, 00:22:12.269 { 00:22:12.269 "name": null, 00:22:12.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.269 "is_configured": false, 00:22:12.269 "data_offset": 2048, 00:22:12.269 "data_size": 63488 00:22:12.269 }, 00:22:12.269 { 00:22:12.269 "name": "BaseBdev3", 00:22:12.269 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:12.269 "is_configured": true, 00:22:12.269 "data_offset": 2048, 00:22:12.269 "data_size": 63488 00:22:12.269 }, 00:22:12.269 { 00:22:12.269 "name": "BaseBdev4", 00:22:12.269 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:12.269 "is_configured": true, 00:22:12.269 "data_offset": 2048, 00:22:12.269 "data_size": 63488 00:22:12.269 } 00:22:12.269 ] 00:22:12.269 }' 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.269 05:41:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:13.205 [2024-10-07 05:41:16.922478] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:13.205 [2024-10-07 05:41:16.922758] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:13.205 [2024-10-07 05:41:16.923039] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.464 "name": "raid_bdev1", 00:22:13.464 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:13.464 "strip_size_kb": 0, 00:22:13.464 "state": "online", 00:22:13.464 "raid_level": "raid1", 00:22:13.464 "superblock": true, 00:22:13.464 "num_base_bdevs": 4, 00:22:13.464 "num_base_bdevs_discovered": 3, 
00:22:13.464 "num_base_bdevs_operational": 3, 00:22:13.464 "base_bdevs_list": [ 00:22:13.464 { 00:22:13.464 "name": "spare", 00:22:13.464 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:13.464 "is_configured": true, 00:22:13.464 "data_offset": 2048, 00:22:13.464 "data_size": 63488 00:22:13.464 }, 00:22:13.464 { 00:22:13.464 "name": null, 00:22:13.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.464 "is_configured": false, 00:22:13.464 "data_offset": 2048, 00:22:13.464 "data_size": 63488 00:22:13.464 }, 00:22:13.464 { 00:22:13.464 "name": "BaseBdev3", 00:22:13.464 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:13.464 "is_configured": true, 00:22:13.464 "data_offset": 2048, 00:22:13.464 "data_size": 63488 00:22:13.464 }, 00:22:13.464 { 00:22:13.464 "name": "BaseBdev4", 00:22:13.464 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:13.464 "is_configured": true, 00:22:13.464 "data_offset": 2048, 00:22:13.464 "data_size": 63488 00:22:13.464 } 00:22:13.464 ] 00:22:13.464 }' 00:22:13.464 05:41:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@660 -- # break 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.722 05:41:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.981 "name": "raid_bdev1", 00:22:13.981 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:13.981 "strip_size_kb": 0, 00:22:13.981 "state": "online", 00:22:13.981 "raid_level": "raid1", 00:22:13.981 "superblock": true, 00:22:13.981 "num_base_bdevs": 4, 00:22:13.981 "num_base_bdevs_discovered": 3, 00:22:13.981 "num_base_bdevs_operational": 3, 00:22:13.981 "base_bdevs_list": [ 00:22:13.981 { 00:22:13.981 "name": "spare", 00:22:13.981 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:13.981 "is_configured": true, 00:22:13.981 "data_offset": 2048, 00:22:13.981 "data_size": 63488 00:22:13.981 }, 00:22:13.981 { 00:22:13.981 "name": null, 00:22:13.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.981 "is_configured": false, 00:22:13.981 "data_offset": 2048, 00:22:13.981 "data_size": 63488 00:22:13.981 }, 00:22:13.981 { 00:22:13.981 "name": "BaseBdev3", 00:22:13.981 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:13.981 "is_configured": true, 00:22:13.981 "data_offset": 2048, 00:22:13.981 "data_size": 63488 00:22:13.981 }, 00:22:13.981 { 00:22:13.981 "name": "BaseBdev4", 00:22:13.981 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:13.981 "is_configured": true, 00:22:13.981 "data_offset": 2048, 00:22:13.981 "data_size": 63488 00:22:13.981 } 00:22:13.981 ] 00:22:13.981 }' 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.981 05:41:17 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.981 05:41:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.240 05:41:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.240 "name": "raid_bdev1", 00:22:14.240 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:14.240 "strip_size_kb": 0, 00:22:14.240 "state": "online", 00:22:14.240 "raid_level": "raid1", 00:22:14.240 "superblock": true, 00:22:14.240 "num_base_bdevs": 4, 00:22:14.240 "num_base_bdevs_discovered": 3, 00:22:14.240 "num_base_bdevs_operational": 3, 00:22:14.240 "base_bdevs_list": [ 00:22:14.240 { 00:22:14.240 "name": "spare", 00:22:14.240 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:14.240 "is_configured": true, 00:22:14.240 "data_offset": 2048, 00:22:14.240 "data_size": 63488 00:22:14.240 }, 00:22:14.240 { 00:22:14.240 "name": null, 00:22:14.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.240 "is_configured": false, 00:22:14.240 "data_offset": 2048, 00:22:14.240 "data_size": 63488 00:22:14.240 }, 00:22:14.240 { 00:22:14.240 "name": "BaseBdev3", 00:22:14.240 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:14.240 "is_configured": true, 00:22:14.240 "data_offset": 2048, 00:22:14.240 "data_size": 63488 00:22:14.240 }, 00:22:14.240 { 00:22:14.240 "name": "BaseBdev4", 00:22:14.240 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:14.240 "is_configured": true, 00:22:14.240 "data_offset": 2048, 00:22:14.240 "data_size": 63488 00:22:14.240 } 00:22:14.240 ] 00:22:14.240 }' 00:22:14.240 05:41:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.240 05:41:18 -- common/autotest_common.sh@10 -- # set +x 00:22:14.808 05:41:18 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.066 [2024-10-07 05:41:18.942031] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.066 [2024-10-07 05:41:18.942191] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.066 [2024-10-07 05:41:18.942381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.066 [2024-10-07 05:41:18.942620] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.066 [2024-10-07 05:41:18.942766] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000a580 name raid_bdev1, state offline 00:22:15.066 05:41:18 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.066 05:41:18 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:15.325 05:41:19 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:15.325 05:41:19 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:15.325 05:41:19 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@12 -- # local i 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.325 05:41:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:15.584 /dev/nbd0 00:22:15.584 05:41:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:15.584 05:41:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:15.584 05:41:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:15.584 05:41:19 -- common/autotest_common.sh@857 -- # local i 00:22:15.584 05:41:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:15.584 05:41:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:15.584 05:41:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:15.584 05:41:19 -- common/autotest_common.sh@861 -- # break 00:22:15.584 05:41:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:15.584 05:41:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:15.584 05:41:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.584 1+0 records in 00:22:15.584 1+0 records out 00:22:15.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740822 s, 5.5 MB/s 00:22:15.584 05:41:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.584 05:41:19 -- common/autotest_common.sh@874 -- # size=4096 00:22:15.584 05:41:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.584 05:41:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:15.584 05:41:19 -- common/autotest_common.sh@877 -- # return 0 00:22:15.584 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.584 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.584 05:41:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:15.844 /dev/nbd1 00:22:15.844 05:41:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:15.844 05:41:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:15.844 05:41:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:15.844 05:41:19 -- common/autotest_common.sh@857 -- # local i 00:22:15.844 05:41:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:15.844 05:41:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:15.844 05:41:19 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:15.844 05:41:19 -- common/autotest_common.sh@861 -- # break 00:22:15.844 05:41:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:15.844 05:41:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:15.844 05:41:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.844 1+0 records in 00:22:15.844 1+0 records out 00:22:15.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383531 s, 10.7 MB/s 00:22:15.844 05:41:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.844 05:41:19 -- common/autotest_common.sh@874 -- # size=4096 00:22:15.844 05:41:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.844 05:41:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:15.844 05:41:19 -- common/autotest_common.sh@877 -- # return 0 00:22:15.844 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.844 05:41:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:15.844 05:41:19 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:16.103 05:41:19 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@51 -- # local i 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.103 05:41:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@41 -- # break 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.362 05:41:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@41 -- # break 00:22:16.621 05:41:20 -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.621 05:41:20 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:16.621 05:41:20 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:16.621 05:41:20 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:16.621 05:41:20 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:22:16.880 05:41:20 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:17.139 [2024-10-07 05:41:20.890959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:17.139 [2024-10-07 05:41:20.891228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.139 [2024-10-07 05:41:20.891311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:17.139 [2024-10-07 05:41:20.891590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.139 [2024-10-07 05:41:20.894008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.139 [2024-10-07 05:41:20.894202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:17.139 [2024-10-07 05:41:20.894423] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:17.139 [2024-10-07 05:41:20.894596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.139 BaseBdev1 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@696 -- # continue 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:17.139 05:41:20 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:17.398 05:41:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:17.657 [2024-10-07 05:41:21.415062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:17.657 [2024-10-07 05:41:21.415246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.657 [2024-10-07 05:41:21.415322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:17.657 [2024-10-07 05:41:21.415441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.657 [2024-10-07 05:41:21.415867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.657 [2024-10-07 05:41:21.416058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:17.657 [2024-10-07 05:41:21.416257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:17.657 [2024-10-07 05:41:21.416388] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:17.657 [2024-10-07 05:41:21.416487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.657 [2024-10-07 05:41:21.416549] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:22:17.657 [2024-10-07 05:41:21.416711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:17.657 BaseBdev3 00:22:17.657 05:41:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:17.657 05:41:21 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:22:17.657 05:41:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:17.657 05:41:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:17.916 [2024-10-07 05:41:21.787183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:17.916 [2024-10-07 05:41:21.787398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.916 [2024-10-07 05:41:21.787475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:17.916 [2024-10-07 05:41:21.787649] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.916 [2024-10-07 05:41:21.788289] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.916 [2024-10-07 05:41:21.788493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:17.916 [2024-10-07 05:41:21.788683] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:17.916 [2024-10-07 05:41:21.788820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:17.916 BaseBdev4 00:22:17.916 05:41:21 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:18.174 05:41:22 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:18.433 [2024-10-07 05:41:22.195272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:18.433 [2024-10-07 05:41:22.195474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.433 [2024-10-07 05:41:22.195546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:18.433 [2024-10-07 05:41:22.195677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.433 [2024-10-07 05:41:22.196130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.433 [2024-10-07 05:41:22.196310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:18.433 [2024-10-07 05:41:22.196530] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:18.433 [2024-10-07 05:41:22.196674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:18.433 spare 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.433 [2024-10-07 05:41:22.296823] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:18.433 [2024-10-07 05:41:22.296973] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:18.433 [2024-10-07 05:41:22.297128] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:22:18.433 [2024-10-07 05:41:22.297831] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:18.433 [2024-10-07 05:41:22.297967] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:18.433 [2024-10-07 05:41:22.298201] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.433 "name": "raid_bdev1", 00:22:18.433 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:18.433 "strip_size_kb": 0, 00:22:18.433 "state": "online", 00:22:18.433 "raid_level": "raid1", 00:22:18.433 "superblock": true, 00:22:18.433 "num_base_bdevs": 4, 00:22:18.433 "num_base_bdevs_discovered": 3, 00:22:18.433 "num_base_bdevs_operational": 3, 00:22:18.433 "base_bdevs_list": [ 00:22:18.433 { 00:22:18.433 "name": "spare", 00:22:18.433 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:18.433 "is_configured": true, 00:22:18.433 "data_offset": 2048, 00:22:18.433 "data_size": 63488 00:22:18.433 }, 00:22:18.433 { 00:22:18.433 "name": null, 00:22:18.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.433 "is_configured": false, 00:22:18.433 "data_offset": 2048, 00:22:18.433 "data_size": 63488 00:22:18.433 }, 00:22:18.433 { 00:22:18.433 "name": "BaseBdev3", 00:22:18.433 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:18.433 "is_configured": true, 00:22:18.433 "data_offset": 2048, 00:22:18.433 "data_size": 63488 00:22:18.433 }, 00:22:18.433 { 00:22:18.433 "name": "BaseBdev4", 00:22:18.433 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:18.433 "is_configured": true, 00:22:18.433 "data_offset": 2048, 00:22:18.433 "data_size": 63488 00:22:18.433 } 00:22:18.433 ] 00:22:18.433 }' 00:22:18.433 05:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.433 05:41:22 -- common/autotest_common.sh@10 -- # set +x 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:19.370 "name": "raid_bdev1", 00:22:19.370 "uuid": "47d39f48-c4c4-4e24-b37f-35eb71466e0d", 00:22:19.370 "strip_size_kb": 0, 00:22:19.370 "state": "online", 00:22:19.370 "raid_level": "raid1", 00:22:19.370 "superblock": true, 00:22:19.370 "num_base_bdevs": 4, 00:22:19.370 "num_base_bdevs_discovered": 3, 00:22:19.370 
"num_base_bdevs_operational": 3, 00:22:19.370 "base_bdevs_list": [ 00:22:19.370 { 00:22:19.370 "name": "spare", 00:22:19.370 "uuid": "3140b169-e419-528b-83be-d65c70f70e4c", 00:22:19.370 "is_configured": true, 00:22:19.370 "data_offset": 2048, 00:22:19.370 "data_size": 63488 00:22:19.370 }, 00:22:19.370 { 00:22:19.370 "name": null, 00:22:19.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.370 "is_configured": false, 00:22:19.370 "data_offset": 2048, 00:22:19.370 "data_size": 63488 00:22:19.370 }, 00:22:19.370 { 00:22:19.370 "name": "BaseBdev3", 00:22:19.370 "uuid": "0fc27d81-a3fb-5525-bb04-e41db994c009", 00:22:19.370 "is_configured": true, 00:22:19.370 "data_offset": 2048, 00:22:19.370 "data_size": 63488 00:22:19.370 }, 00:22:19.370 { 00:22:19.370 "name": "BaseBdev4", 00:22:19.370 "uuid": "2a623c5f-e596-5bd3-a2dd-d245b88c9e5e", 00:22:19.370 "is_configured": true, 00:22:19.370 "data_offset": 2048, 00:22:19.370 "data_size": 63488 00:22:19.370 } 00:22:19.370 ] 00:22:19.370 }' 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:19.370 05:41:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:19.629 05:41:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:19.629 05:41:23 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.629 05:41:23 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:19.888 05:41:23 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:19.888 05:41:23 -- bdev/bdev_raid.sh@709 -- # killprocess 168882 00:22:19.888 05:41:23 -- common/autotest_common.sh@926 -- # '[' -z 168882 ']' 00:22:19.888 05:41:23 -- common/autotest_common.sh@930 -- # kill -0 168882 00:22:19.888 05:41:23 -- common/autotest_common.sh@931 -- # uname 00:22:19.888 05:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.888 05:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 168882 00:22:19.888 05:41:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:19.888 05:41:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:19.888 05:41:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 168882' 00:22:19.888 killing process with pid 168882 00:22:19.888 05:41:23 -- common/autotest_common.sh@945 -- # kill 168882 00:22:19.888 Received shutdown signal, test time was about 60.000000 seconds 00:22:19.888 00:22:19.888 Latency(us) 00:22:19.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.888 =================================================================================================================== 00:22:19.888 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.888 05:41:23 -- common/autotest_common.sh@950 -- # wait 168882 00:22:19.888 [2024-10-07 05:41:23.638591] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:19.888 [2024-10-07 05:41:23.638698] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.888 [2024-10-07 05:41:23.638803] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:19.888 [2024-10-07 05:41:23.638917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:20.147 [2024-10-07 05:41:23.978427] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.083 ************************************ 00:22:21.083 END TEST raid_rebuild_test_sb 00:22:21.083 ************************************ 00:22:21.083 05:41:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:21.083 00:22:21.083 real 0m27.090s 00:22:21.083 user 0m39.203s 00:22:21.083 sys 0m4.440s 00:22:21.083 05:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.083 05:41:25 -- common/autotest_common.sh@10 -- # set +x 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:21.342 05:41:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:21.342 05:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:21.342 05:41:25 -- common/autotest_common.sh@10 -- # set +x 00:22:21.342 ************************************ 00:22:21.342 START TEST raid_rebuild_test_io 00:22:21.342 ************************************ 00:22:21.342 05:41:25 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.342 05:41:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=169533 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 169533 /var/tmp/spdk-raid.sock 00:22:21.343 05:41:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:21.343 05:41:25 -- common/autotest_common.sh@819 -- # '[' -z 169533 ']' 
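The harness above launches bdevperf as the background I/O generator on a private RPC socket before any bdevs exist; with -z the workload stays deferred until the later perform_tests RPC. Below is a minimal sketch of that launch-and-wait step under the same assumptions as this run (repository at /home/vagrant/spdk_repo/spdk, socket /var/tmp/spdk-raid.sock); the polling loop stands in for the suite's waitforlisten helper and is illustrative only.

```bash
#!/usr/bin/env bash
# Sketch only: start bdevperf on a private RPC socket and wait until it answers RPCs.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk        # repository location used in this run
RPC_SOCK=/var/tmp/spdk-raid.sock

# Same invocation as in the trace: randrw at a 50% mix, 3 MiB I/Os, queue depth 2,
# 60 s run time, deferred start (-z), bdev_raid debug logging (-L) as seen above.
"$SPDK_DIR/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Poll the RPC socket instead of using the suite's waitforlisten helper.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "bdevperf (pid $raid_pid) is listening on $RPC_SOCK"
```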
00:22:21.343 05:41:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:21.343 05:41:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:21.343 05:41:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:21.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:21.343 05:41:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:21.343 05:41:25 -- common/autotest_common.sh@10 -- # set +x 00:22:21.343 [2024-10-07 05:41:25.167956] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:21.343 [2024-10-07 05:41:25.169181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169533 ] 00:22:21.343 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:21.343 Zero copy mechanism will not be used. 00:22:21.602 [2024-10-07 05:41:25.337542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.602 [2024-10-07 05:41:25.521730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.861 [2024-10-07 05:41:25.708514] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.119 05:41:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.119 05:41:26 -- common/autotest_common.sh@852 -- # return 0 00:22:22.119 05:41:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.119 05:41:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:22.119 05:41:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:22.378 BaseBdev1 00:22:22.637 05:41:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.637 05:41:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:22.637 05:41:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:22.637 BaseBdev2 00:22:22.896 05:41:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.896 05:41:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:22.896 05:41:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:23.155 BaseBdev3 00:22:23.155 05:41:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:23.155 05:41:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:23.155 05:41:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:23.155 BaseBdev4 00:22:23.155 05:41:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:23.414 spare_malloc 00:22:23.672 05:41:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:23.672 spare_delay 00:22:23.672 05:41:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:23.931 
[2024-10-07 05:41:27.765347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:23.931 [2024-10-07 05:41:27.765578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.931 [2024-10-07 05:41:27.765652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:23.931 [2024-10-07 05:41:27.765850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.931 [2024-10-07 05:41:27.768318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.931 [2024-10-07 05:41:27.768489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:23.931 spare 00:22:23.931 05:41:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:24.190 [2024-10-07 05:41:27.961394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.191 [2024-10-07 05:41:27.963611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.191 [2024-10-07 05:41:27.963793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.191 [2024-10-07 05:41:27.963874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:24.191 [2024-10-07 05:41:27.964057] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:24.191 [2024-10-07 05:41:27.964102] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:24.191 [2024-10-07 05:41:27.964340] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:24.191 [2024-10-07 05:41:27.964859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:24.191 [2024-10-07 05:41:27.964999] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:24.191 [2024-10-07 05:41:27.965238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.191 05:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.450 05:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.450 "name": "raid_bdev1", 00:22:24.450 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:24.450 "strip_size_kb": 0, 00:22:24.450 "state": "online", 00:22:24.450 "raid_level": "raid1", 00:22:24.450 "superblock": 
false, 00:22:24.450 "num_base_bdevs": 4, 00:22:24.450 "num_base_bdevs_discovered": 4, 00:22:24.450 "num_base_bdevs_operational": 4, 00:22:24.450 "base_bdevs_list": [ 00:22:24.450 { 00:22:24.450 "name": "BaseBdev1", 00:22:24.450 "uuid": "30367ac7-a59a-4e24-877d-1c1aaecbd976", 00:22:24.450 "is_configured": true, 00:22:24.450 "data_offset": 0, 00:22:24.450 "data_size": 65536 00:22:24.450 }, 00:22:24.450 { 00:22:24.450 "name": "BaseBdev2", 00:22:24.450 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:24.450 "is_configured": true, 00:22:24.450 "data_offset": 0, 00:22:24.450 "data_size": 65536 00:22:24.450 }, 00:22:24.450 { 00:22:24.450 "name": "BaseBdev3", 00:22:24.450 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:24.450 "is_configured": true, 00:22:24.450 "data_offset": 0, 00:22:24.450 "data_size": 65536 00:22:24.450 }, 00:22:24.450 { 00:22:24.450 "name": "BaseBdev4", 00:22:24.450 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:24.450 "is_configured": true, 00:22:24.450 "data_offset": 0, 00:22:24.450 "data_size": 65536 00:22:24.450 } 00:22:24.450 ] 00:22:24.450 }' 00:22:24.450 05:41:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.450 05:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:25.016 05:41:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:25.016 05:41:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:25.016 [2024-10-07 05:41:28.993878] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.292 05:41:29 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:25.292 05:41:29 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:25.292 05:41:29 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.571 05:41:29 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:25.571 05:41:29 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:25.572 [2024-10-07 05:41:29.393035] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:25.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:25.572 Zero copy mechanism will not be used. 00:22:25.572 Running I/O for 60 seconds... 
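Condensing the steps traced just above: four 32 MiB malloc bdevs (512-byte blocks, matching the 65536-block data_size reported in the JSON) become the raid1 members, the deferred bdevperf workload is released with perform_tests, and BaseBdev1 is hot-removed while I/O is running. The sketch below uses only the RPC calls visible in the trace; the rpc wrapper function is illustrative, not part of the suite.

```bash
#!/usr/bin/env bash
# Sketch only: recreate the setup traced above against an already running
# bdevperf (-z) instance listening on /var/tmp/spdk-raid.sock.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

# Four 32 MiB / 512 B-block malloc bdevs as raid members, as in the trace.
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    rpc bdev_malloc_create 32 512 -b "$bdev"
done

# Assemble the raid1 bdev under test (this test variant runs without a superblock).
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

# Release the deferred bdevperf workload, then hot-remove one member while I/O runs.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/spdk-raid.sock perform_tests &
rpc bdev_raid_remove_base_bdev BaseBdev1

# The raid bdev is expected to stay online with one member missing.
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
```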
00:22:25.572 [2024-10-07 05:41:29.456764] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:25.572 [2024-10-07 05:41:29.457310] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.572 05:41:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.831 05:41:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.831 "name": "raid_bdev1", 00:22:25.831 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:25.831 "strip_size_kb": 0, 00:22:25.831 "state": "online", 00:22:25.831 "raid_level": "raid1", 00:22:25.831 "superblock": false, 00:22:25.831 "num_base_bdevs": 4, 00:22:25.831 "num_base_bdevs_discovered": 3, 00:22:25.831 "num_base_bdevs_operational": 3, 00:22:25.831 "base_bdevs_list": [ 00:22:25.831 { 00:22:25.831 "name": null, 00:22:25.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.831 "is_configured": false, 00:22:25.831 "data_offset": 0, 00:22:25.831 "data_size": 65536 00:22:25.831 }, 00:22:25.831 { 00:22:25.831 "name": "BaseBdev2", 00:22:25.831 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:25.831 "is_configured": true, 00:22:25.831 "data_offset": 0, 00:22:25.831 "data_size": 65536 00:22:25.831 }, 00:22:25.831 { 00:22:25.831 "name": "BaseBdev3", 00:22:25.831 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:25.831 "is_configured": true, 00:22:25.831 "data_offset": 0, 00:22:25.831 "data_size": 65536 00:22:25.831 }, 00:22:25.831 { 00:22:25.831 "name": "BaseBdev4", 00:22:25.831 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:25.831 "is_configured": true, 00:22:25.831 "data_offset": 0, 00:22:25.831 "data_size": 65536 00:22:25.831 } 00:22:25.831 ] 00:22:25.831 }' 00:22:25.831 05:41:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.831 05:41:29 -- common/autotest_common.sh@10 -- # set +x 00:22:26.400 05:41:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.659 [2024-10-07 05:41:30.513105] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:26.659 [2024-10-07 05:41:30.513513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.659 [2024-10-07 05:41:30.550843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:26.659 05:41:30 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:26.659 [2024-10-07 05:41:30.553232] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.918 [2024-10-07 
05:41:30.680359] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:26.918 [2024-10-07 05:41:30.807056] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:26.918 [2024-10-07 05:41:30.807601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:27.178 [2024-10-07 05:41:31.042426] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:27.178 [2024-10-07 05:41:31.145784] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:27.178 [2024-10-07 05:41:31.146175] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:27.745 [2024-10-07 05:41:31.517108] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.745 05:41:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.004 "name": "raid_bdev1", 00:22:28.004 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:28.004 "strip_size_kb": 0, 00:22:28.004 "state": "online", 00:22:28.004 "raid_level": "raid1", 00:22:28.004 "superblock": false, 00:22:28.004 "num_base_bdevs": 4, 00:22:28.004 "num_base_bdevs_discovered": 4, 00:22:28.004 "num_base_bdevs_operational": 4, 00:22:28.004 "process": { 00:22:28.004 "type": "rebuild", 00:22:28.004 "target": "spare", 00:22:28.004 "progress": { 00:22:28.004 "blocks": 18432, 00:22:28.004 "percent": 28 00:22:28.004 } 00:22:28.004 }, 00:22:28.004 "base_bdevs_list": [ 00:22:28.004 { 00:22:28.004 "name": "spare", 00:22:28.004 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:28.004 "is_configured": true, 00:22:28.004 "data_offset": 0, 00:22:28.004 "data_size": 65536 00:22:28.004 }, 00:22:28.004 { 00:22:28.004 "name": "BaseBdev2", 00:22:28.004 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:28.004 "is_configured": true, 00:22:28.004 "data_offset": 0, 00:22:28.004 "data_size": 65536 00:22:28.004 }, 00:22:28.004 { 00:22:28.004 "name": "BaseBdev3", 00:22:28.004 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:28.004 "is_configured": true, 00:22:28.004 "data_offset": 0, 00:22:28.004 "data_size": 65536 00:22:28.004 }, 00:22:28.004 { 00:22:28.004 "name": "BaseBdev4", 00:22:28.004 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:28.004 "is_configured": true, 00:22:28.004 "data_offset": 0, 00:22:28.004 "data_size": 65536 00:22:28.004 } 00:22:28.004 ] 00:22:28.004 }' 00:22:28.004 [2024-10-07 05:41:31.749153] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:28.004 [2024-10-07 05:41:31.749915] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.004 05:41:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:28.004 [2024-10-07 05:41:31.953831] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:28.264 [2024-10-07 05:41:32.053683] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:28.264 [2024-10-07 05:41:32.075335] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:28.264 [2024-10-07 05:41:32.078783] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.264 [2024-10-07 05:41:32.113056] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.264 05:41:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.523 05:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.523 "name": "raid_bdev1", 00:22:28.523 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:28.523 "strip_size_kb": 0, 00:22:28.523 "state": "online", 00:22:28.523 "raid_level": "raid1", 00:22:28.523 "superblock": false, 00:22:28.523 "num_base_bdevs": 4, 00:22:28.523 "num_base_bdevs_discovered": 3, 00:22:28.523 "num_base_bdevs_operational": 3, 00:22:28.523 "base_bdevs_list": [ 00:22:28.523 { 00:22:28.523 "name": null, 00:22:28.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.523 "is_configured": false, 00:22:28.523 "data_offset": 0, 00:22:28.523 "data_size": 65536 00:22:28.523 }, 00:22:28.523 { 00:22:28.523 "name": "BaseBdev2", 00:22:28.523 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:28.523 "is_configured": true, 00:22:28.523 "data_offset": 0, 00:22:28.523 "data_size": 65536 00:22:28.523 }, 00:22:28.523 { 00:22:28.523 "name": "BaseBdev3", 00:22:28.523 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:28.523 "is_configured": true, 00:22:28.523 "data_offset": 0, 00:22:28.523 "data_size": 65536 00:22:28.523 }, 00:22:28.523 { 00:22:28.523 "name": "BaseBdev4", 00:22:28.523 "uuid": 
"b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:28.523 "is_configured": true, 00:22:28.523 "data_offset": 0, 00:22:28.523 "data_size": 65536 00:22:28.523 } 00:22:28.523 ] 00:22:28.523 }' 00:22:28.523 05:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.523 05:41:32 -- common/autotest_common.sh@10 -- # set +x 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.090 05:41:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.350 "name": "raid_bdev1", 00:22:29.350 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:29.350 "strip_size_kb": 0, 00:22:29.350 "state": "online", 00:22:29.350 "raid_level": "raid1", 00:22:29.350 "superblock": false, 00:22:29.350 "num_base_bdevs": 4, 00:22:29.350 "num_base_bdevs_discovered": 3, 00:22:29.350 "num_base_bdevs_operational": 3, 00:22:29.350 "base_bdevs_list": [ 00:22:29.350 { 00:22:29.350 "name": null, 00:22:29.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.350 "is_configured": false, 00:22:29.350 "data_offset": 0, 00:22:29.350 "data_size": 65536 00:22:29.350 }, 00:22:29.350 { 00:22:29.350 "name": "BaseBdev2", 00:22:29.350 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:29.350 "is_configured": true, 00:22:29.350 "data_offset": 0, 00:22:29.350 "data_size": 65536 00:22:29.350 }, 00:22:29.350 { 00:22:29.350 "name": "BaseBdev3", 00:22:29.350 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:29.350 "is_configured": true, 00:22:29.350 "data_offset": 0, 00:22:29.350 "data_size": 65536 00:22:29.350 }, 00:22:29.350 { 00:22:29.350 "name": "BaseBdev4", 00:22:29.350 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:29.350 "is_configured": true, 00:22:29.350 "data_offset": 0, 00:22:29.350 "data_size": 65536 00:22:29.350 } 00:22:29.350 ] 00:22:29.350 }' 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:29.350 05:41:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:29.609 [2024-10-07 05:41:33.440970] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:29.609 [2024-10-07 05:41:33.441363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:29.609 05:41:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:29.609 [2024-10-07 05:41:33.478290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:29.609 [2024-10-07 05:41:33.480739] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.868 [2024-10-07 05:41:33.593387] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:22:29.868 [2024-10-07 05:41:33.594682] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:29.868 [2024-10-07 05:41:33.841399] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:29.868 [2024-10-07 05:41:33.842476] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:30.435 [2024-10-07 05:41:34.202151] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:30.435 [2024-10-07 05:41:34.324929] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:30.435 [2024-10-07 05:41:34.326054] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.694 05:41:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.952 [2024-10-07 05:41:34.689719] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:30.952 05:41:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.952 "name": "raid_bdev1", 00:22:30.952 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:30.952 "strip_size_kb": 0, 00:22:30.952 "state": "online", 00:22:30.952 "raid_level": "raid1", 00:22:30.952 "superblock": false, 00:22:30.952 "num_base_bdevs": 4, 00:22:30.952 "num_base_bdevs_discovered": 4, 00:22:30.952 "num_base_bdevs_operational": 4, 00:22:30.952 "process": { 00:22:30.952 "type": "rebuild", 00:22:30.952 "target": "spare", 00:22:30.952 "progress": { 00:22:30.952 "blocks": 12288, 00:22:30.952 "percent": 18 00:22:30.952 } 00:22:30.952 }, 00:22:30.952 "base_bdevs_list": [ 00:22:30.952 { 00:22:30.952 "name": "spare", 00:22:30.953 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:30.953 "is_configured": true, 00:22:30.953 "data_offset": 0, 00:22:30.953 "data_size": 65536 00:22:30.953 }, 00:22:30.953 { 00:22:30.953 "name": "BaseBdev2", 00:22:30.953 "uuid": "d000cc12-bffa-422f-889d-97858689f5e5", 00:22:30.953 "is_configured": true, 00:22:30.953 "data_offset": 0, 00:22:30.953 "data_size": 65536 00:22:30.953 }, 00:22:30.953 { 00:22:30.953 "name": "BaseBdev3", 00:22:30.953 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:30.953 "is_configured": true, 00:22:30.953 "data_offset": 0, 00:22:30.953 "data_size": 65536 00:22:30.953 }, 00:22:30.953 { 00:22:30.953 "name": "BaseBdev4", 00:22:30.953 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:30.953 "is_configured": true, 00:22:30.953 "data_offset": 0, 00:22:30.953 "data_size": 65536 00:22:30.953 } 00:22:30.953 ] 00:22:30.953 }' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.953 
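The repeated checks above follow the suite's verify_raid_bdev_process pattern: while a rebuild is in flight the raid bdev's JSON carries a process object, and jq's // "none" fallback turns its absence into a stable string. A small standalone sketch of reading that information (socket path and bdev name as in this run; the variable names are illustrative):

```bash
#!/usr/bin/env bash
# Sketch only: inspect the rebuild process reported by bdev_raid_get_bdevs.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# The // "none" fallback keeps output well-defined once the process object disappears.
ptype=$(jq -r '.process.type // "none"'        <<< "$info")
ptarget=$(jq -r '.process.target // "none"'    <<< "$info")
pct=$(jq -r '.process.progress.percent // 0'   <<< "$info")

echo "process=$ptype target=$ptarget progress=${pct}%"
# At the point captured above this would print: process=rebuild target=spare progress=18%
```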
05:41:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:30.953 05:41:34 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:30.953 [2024-10-07 05:41:34.915397] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:30.953 [2024-10-07 05:41:34.916381] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:31.212 [2024-10-07 05:41:35.048597] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:31.212 [2024-10-07 05:41:35.146150] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:22:31.212 [2024-10-07 05:41:35.146304] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.212 05:41:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.471 [2024-10-07 05:41:35.271038] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:31.471 05:41:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.471 "name": "raid_bdev1", 00:22:31.471 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:31.471 "strip_size_kb": 0, 00:22:31.471 "state": "online", 00:22:31.471 "raid_level": "raid1", 00:22:31.471 "superblock": false, 00:22:31.471 "num_base_bdevs": 4, 00:22:31.471 "num_base_bdevs_discovered": 3, 00:22:31.471 "num_base_bdevs_operational": 3, 00:22:31.471 "process": { 00:22:31.471 "type": "rebuild", 00:22:31.471 "target": "spare", 00:22:31.471 "progress": { 00:22:31.471 "blocks": 20480, 00:22:31.471 "percent": 31 00:22:31.471 } 00:22:31.471 }, 00:22:31.471 "base_bdevs_list": [ 00:22:31.471 { 00:22:31.471 "name": "spare", 00:22:31.471 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:31.471 "is_configured": true, 00:22:31.471 "data_offset": 0, 00:22:31.471 "data_size": 65536 00:22:31.471 }, 00:22:31.471 { 00:22:31.471 "name": null, 00:22:31.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.471 "is_configured": false, 00:22:31.471 "data_offset": 0, 00:22:31.471 "data_size": 65536 00:22:31.471 }, 00:22:31.471 { 00:22:31.471 "name": "BaseBdev3", 00:22:31.471 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:31.471 "is_configured": true, 00:22:31.471 
"data_offset": 0, 00:22:31.471 "data_size": 65536 00:22:31.471 }, 00:22:31.471 { 00:22:31.471 "name": "BaseBdev4", 00:22:31.471 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:31.471 "is_configured": true, 00:22:31.471 "data_offset": 0, 00:22:31.471 "data_size": 65536 00:22:31.471 } 00:22:31.471 ] 00:22:31.471 }' 00:22:31.471 05:41:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.731 [2024-10-07 05:41:35.502153] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@657 -- # local timeout=535 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.731 05:41:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.990 05:41:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.990 "name": "raid_bdev1", 00:22:31.990 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:31.990 "strip_size_kb": 0, 00:22:31.990 "state": "online", 00:22:31.990 "raid_level": "raid1", 00:22:31.990 "superblock": false, 00:22:31.990 "num_base_bdevs": 4, 00:22:31.990 "num_base_bdevs_discovered": 3, 00:22:31.990 "num_base_bdevs_operational": 3, 00:22:31.990 "process": { 00:22:31.990 "type": "rebuild", 00:22:31.990 "target": "spare", 00:22:31.990 "progress": { 00:22:31.990 "blocks": 24576, 00:22:31.990 "percent": 37 00:22:31.990 } 00:22:31.990 }, 00:22:31.990 "base_bdevs_list": [ 00:22:31.990 { 00:22:31.990 "name": "spare", 00:22:31.990 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:31.990 "is_configured": true, 00:22:31.990 "data_offset": 0, 00:22:31.990 "data_size": 65536 00:22:31.990 }, 00:22:31.990 { 00:22:31.990 "name": null, 00:22:31.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.990 "is_configured": false, 00:22:31.990 "data_offset": 0, 00:22:31.990 "data_size": 65536 00:22:31.990 }, 00:22:31.990 { 00:22:31.990 "name": "BaseBdev3", 00:22:31.990 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:31.990 "is_configured": true, 00:22:31.990 "data_offset": 0, 00:22:31.990 "data_size": 65536 00:22:31.990 }, 00:22:31.990 { 00:22:31.990 "name": "BaseBdev4", 00:22:31.990 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:31.990 "is_configured": true, 00:22:31.990 "data_offset": 0, 00:22:31.990 "data_size": 65536 00:22:31.990 } 00:22:31.990 ] 00:22:31.990 }' 00:22:31.990 05:41:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.990 05:41:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.990 05:41:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.990 05:41:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.990 05:41:35 
-- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:32.249 [2024-10-07 05:41:36.081206] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:32.507 [2024-10-07 05:41:36.415094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:32.765 [2024-10-07 05:41:36.631783] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:33.023 [2024-10-07 05:41:36.867064] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.023 05:41:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.281 [2024-10-07 05:41:37.092910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:33.281 05:41:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.281 "name": "raid_bdev1", 00:22:33.281 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:33.281 "strip_size_kb": 0, 00:22:33.281 "state": "online", 00:22:33.281 "raid_level": "raid1", 00:22:33.281 "superblock": false, 00:22:33.281 "num_base_bdevs": 4, 00:22:33.281 "num_base_bdevs_discovered": 3, 00:22:33.281 "num_base_bdevs_operational": 3, 00:22:33.281 "process": { 00:22:33.281 "type": "rebuild", 00:22:33.281 "target": "spare", 00:22:33.281 "progress": { 00:22:33.281 "blocks": 47104, 00:22:33.281 "percent": 71 00:22:33.281 } 00:22:33.281 }, 00:22:33.281 "base_bdevs_list": [ 00:22:33.281 { 00:22:33.281 "name": "spare", 00:22:33.281 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:33.281 "is_configured": true, 00:22:33.281 "data_offset": 0, 00:22:33.281 "data_size": 65536 00:22:33.281 }, 00:22:33.281 { 00:22:33.281 "name": null, 00:22:33.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.281 "is_configured": false, 00:22:33.282 "data_offset": 0, 00:22:33.282 "data_size": 65536 00:22:33.282 }, 00:22:33.282 { 00:22:33.282 "name": "BaseBdev3", 00:22:33.282 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:33.282 "is_configured": true, 00:22:33.282 "data_offset": 0, 00:22:33.282 "data_size": 65536 00:22:33.282 }, 00:22:33.282 { 00:22:33.282 "name": "BaseBdev4", 00:22:33.282 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:33.282 "is_configured": true, 00:22:33.282 "data_offset": 0, 00:22:33.282 "data_size": 65536 00:22:33.282 } 00:22:33.282 ] 00:22:33.282 }' 00:22:33.282 05:41:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.282 05:41:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.282 05:41:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.282 05:41:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.282 05:41:37 -- bdev/bdev_raid.sh@662 -- # 
sleep 1 00:22:33.540 [2024-10-07 05:41:37.417503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:34.486 [2024-10-07 05:41:38.095228] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:34.486 [2024-10-07 05:41:38.195231] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:34.486 [2024-10-07 05:41:38.197575] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.486 05:41:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.753 "name": "raid_bdev1", 00:22:34.753 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:34.753 "strip_size_kb": 0, 00:22:34.753 "state": "online", 00:22:34.753 "raid_level": "raid1", 00:22:34.753 "superblock": false, 00:22:34.753 "num_base_bdevs": 4, 00:22:34.753 "num_base_bdevs_discovered": 3, 00:22:34.753 "num_base_bdevs_operational": 3, 00:22:34.753 "base_bdevs_list": [ 00:22:34.753 { 00:22:34.753 "name": "spare", 00:22:34.753 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:34.753 "is_configured": true, 00:22:34.753 "data_offset": 0, 00:22:34.753 "data_size": 65536 00:22:34.753 }, 00:22:34.753 { 00:22:34.753 "name": null, 00:22:34.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.753 "is_configured": false, 00:22:34.753 "data_offset": 0, 00:22:34.753 "data_size": 65536 00:22:34.753 }, 00:22:34.753 { 00:22:34.753 "name": "BaseBdev3", 00:22:34.753 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:34.753 "is_configured": true, 00:22:34.753 "data_offset": 0, 00:22:34.753 "data_size": 65536 00:22:34.753 }, 00:22:34.753 { 00:22:34.753 "name": "BaseBdev4", 00:22:34.753 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:34.753 "is_configured": true, 00:22:34.753 "data_offset": 0, 00:22:34.753 "data_size": 65536 00:22:34.753 } 00:22:34.753 ] 00:22:34.753 }' 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.753 05:41:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.754 05:41:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.754 05:41:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.754 05:41:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.754 05:41:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:34.754 05:41:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:35.012 "name": "raid_bdev1", 00:22:35.012 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:35.012 "strip_size_kb": 0, 00:22:35.012 "state": "online", 00:22:35.012 "raid_level": "raid1", 00:22:35.012 "superblock": false, 00:22:35.012 "num_base_bdevs": 4, 00:22:35.012 "num_base_bdevs_discovered": 3, 00:22:35.012 "num_base_bdevs_operational": 3, 00:22:35.012 "base_bdevs_list": [ 00:22:35.012 { 00:22:35.012 "name": "spare", 00:22:35.012 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:35.012 "is_configured": true, 00:22:35.012 "data_offset": 0, 00:22:35.012 "data_size": 65536 00:22:35.012 }, 00:22:35.012 { 00:22:35.012 "name": null, 00:22:35.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.012 "is_configured": false, 00:22:35.012 "data_offset": 0, 00:22:35.012 "data_size": 65536 00:22:35.012 }, 00:22:35.012 { 00:22:35.012 "name": "BaseBdev3", 00:22:35.012 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:35.012 "is_configured": true, 00:22:35.012 "data_offset": 0, 00:22:35.012 "data_size": 65536 00:22:35.012 }, 00:22:35.012 { 00:22:35.012 "name": "BaseBdev4", 00:22:35.012 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:35.012 "is_configured": true, 00:22:35.012 "data_offset": 0, 00:22:35.012 "data_size": 65536 00:22:35.012 } 00:22:35.012 ] 00:22:35.012 }' 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.012 05:41:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.271 05:41:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.271 "name": "raid_bdev1", 00:22:35.271 "uuid": "cabef824-4e42-4088-b8f4-6dd1df3a8d9d", 00:22:35.271 "strip_size_kb": 0, 00:22:35.271 "state": "online", 00:22:35.271 "raid_level": "raid1", 00:22:35.271 "superblock": false, 00:22:35.271 "num_base_bdevs": 4, 00:22:35.271 "num_base_bdevs_discovered": 3, 00:22:35.271 "num_base_bdevs_operational": 3, 00:22:35.271 "base_bdevs_list": [ 00:22:35.271 { 00:22:35.271 "name": "spare", 00:22:35.271 "uuid": "17694828-ebf6-5a48-a13c-508244c2fdbb", 00:22:35.271 "is_configured": true, 00:22:35.271 "data_offset": 0, 00:22:35.271 "data_size": 65536 
00:22:35.271 }, 00:22:35.271 { 00:22:35.271 "name": null, 00:22:35.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.271 "is_configured": false, 00:22:35.271 "data_offset": 0, 00:22:35.271 "data_size": 65536 00:22:35.271 }, 00:22:35.271 { 00:22:35.271 "name": "BaseBdev3", 00:22:35.271 "uuid": "04dc0997-10f9-44b0-9c5f-dc72d5cc8663", 00:22:35.271 "is_configured": true, 00:22:35.271 "data_offset": 0, 00:22:35.271 "data_size": 65536 00:22:35.271 }, 00:22:35.271 { 00:22:35.271 "name": "BaseBdev4", 00:22:35.271 "uuid": "b5538130-3eba-4b62-8c13-de59e8d09ca1", 00:22:35.271 "is_configured": true, 00:22:35.271 "data_offset": 0, 00:22:35.271 "data_size": 65536 00:22:35.271 } 00:22:35.271 ] 00:22:35.271 }' 00:22:35.271 05:41:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.271 05:41:39 -- common/autotest_common.sh@10 -- # set +x 00:22:36.206 05:41:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:36.206 [2024-10-07 05:41:40.029078] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.206 [2024-10-07 05:41:40.029375] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.206 00:22:36.206 Latency(us) 00:22:36.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.206 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:36.206 raid_bdev1 : 10.73 101.93 305.78 0.00 0.00 14020.77 290.44 117726.49 00:22:36.206 =================================================================================================================== 00:22:36.206 Total : 101.93 305.78 0.00 0.00 14020.77 290.44 117726.49 00:22:36.206 [2024-10-07 05:41:40.144388] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.206 [2024-10-07 05:41:40.144546] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.206 [2024-10-07 05:41:40.144674] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.206 0 00:22:36.206 [2024-10-07 05:41:40.144970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:36.206 05:41:40 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.206 05:41:40 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:36.465 05:41:40 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:36.465 05:41:40 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:36.465 05:41:40 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.465 05:41:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:36.724 /dev/nbd0 00:22:36.724 05:41:40 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd0 00:22:36.724 05:41:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:36.724 05:41:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:36.724 05:41:40 -- common/autotest_common.sh@857 -- # local i 00:22:36.724 05:41:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.724 05:41:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.724 05:41:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:36.724 05:41:40 -- common/autotest_common.sh@861 -- # break 00:22:36.724 05:41:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.724 05:41:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.724 05:41:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.724 1+0 records in 00:22:36.724 1+0 records out 00:22:36.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178554 s, 22.9 MB/s 00:22:36.724 05:41:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.983 05:41:40 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.983 05:41:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.983 05:41:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.983 05:41:40 -- common/autotest_common.sh@877 -- # return 0 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@678 -- # continue 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:36.983 /dev/nbd1 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.983 05:41:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:36.983 05:41:40 -- common/autotest_common.sh@857 -- # local i 00:22:36.983 05:41:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:36.983 05:41:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:36.983 05:41:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:36.983 05:41:40 -- common/autotest_common.sh@861 -- # break 00:22:36.983 05:41:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:36.983 05:41:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:36.983 05:41:40 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.983 1+0 records in 00:22:36.983 1+0 records out 00:22:36.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333117 s, 12.3 MB/s 00:22:36.983 05:41:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.983 05:41:40 -- common/autotest_common.sh@874 -- # size=4096 00:22:36.983 05:41:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.983 05:41:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:36.983 05:41:40 -- common/autotest_common.sh@877 -- # return 0 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.983 05:41:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.983 05:41:40 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:37.243 05:41:41 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@51 -- # local i 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.243 05:41:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@41 -- # break 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.502 05:41:41 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:37.502 05:41:41 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:37.502 05:41:41 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@12 -- # local i 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.502 05:41:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:37.761 /dev/nbd1 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:37.761 05:41:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:37.761 05:41:41 -- common/autotest_common.sh@857 -- # local i 00:22:37.761 05:41:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:37.761 05:41:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:37.761 
05:41:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:37.761 05:41:41 -- common/autotest_common.sh@861 -- # break 00:22:37.761 05:41:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:37.761 05:41:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:37.761 05:41:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:37.761 1+0 records in 00:22:37.761 1+0 records out 00:22:37.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269879 s, 15.2 MB/s 00:22:37.761 05:41:41 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.761 05:41:41 -- common/autotest_common.sh@874 -- # size=4096 00:22:37.761 05:41:41 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.761 05:41:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:37.761 05:41:41 -- common/autotest_common.sh@877 -- # return 0 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.761 05:41:41 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:37.761 05:41:41 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@51 -- # local i 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.761 05:41:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@41 -- # break 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.020 05:41:41 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@51 -- # local i 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.020 05:41:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.279 05:41:42 -- bdev/nbd_common.sh@41 -- # break 00:22:38.279 05:41:42 -- 
bdev/nbd_common.sh@45 -- # return 0 00:22:38.279 05:41:42 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:38.279 05:41:42 -- bdev/bdev_raid.sh@709 -- # killprocess 169533 00:22:38.279 05:41:42 -- common/autotest_common.sh@926 -- # '[' -z 169533 ']' 00:22:38.279 05:41:42 -- common/autotest_common.sh@930 -- # kill -0 169533 00:22:38.279 05:41:42 -- common/autotest_common.sh@931 -- # uname 00:22:38.279 05:41:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:38.279 05:41:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 169533 00:22:38.279 05:41:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:38.279 05:41:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:38.279 05:41:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 169533' 00:22:38.279 killing process with pid 169533 00:22:38.279 05:41:42 -- common/autotest_common.sh@945 -- # kill 169533 00:22:38.279 Received shutdown signal, test time was about 12.855590 seconds 00:22:38.279 00:22:38.279 Latency(us) 00:22:38.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.279 =================================================================================================================== 00:22:38.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.279 [2024-10-07 05:41:42.251176] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:38.279 05:41:42 -- common/autotest_common.sh@950 -- # wait 169533 00:22:38.847 [2024-10-07 05:41:42.543644] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:39.786 00:22:39.786 real 0m18.530s 00:22:39.786 user 0m28.515s 00:22:39.786 sys 0m2.398s 00:22:39.786 05:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.786 ************************************ 00:22:39.786 END TEST raid_rebuild_test_io 00:22:39.786 ************************************ 00:22:39.786 05:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:39.786 05:41:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:39.786 05:41:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:39.786 05:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:39.786 ************************************ 00:22:39.786 START TEST raid_rebuild_test_sb_io 00:22:39.786 ************************************ 00:22:39.786 05:41:43 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 
-- # echo BaseBdev3 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@544 -- # raid_pid=170042 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@545 -- # waitforlisten 170042 /var/tmp/spdk-raid.sock 00:22:39.786 05:41:43 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:39.786 05:41:43 -- common/autotest_common.sh@819 -- # '[' -z 170042 ']' 00:22:39.786 05:41:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.786 05:41:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.786 05:41:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:39.786 05:41:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.786 05:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:39.786 [2024-10-07 05:41:43.756158] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:22:39.786 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:39.786 Zero copy mechanism will not be used. 
00:22:39.786 [2024-10-07 05:41:43.756346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170042 ] 00:22:40.103 [2024-10-07 05:41:43.928365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.361 [2024-10-07 05:41:44.118868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.361 [2024-10-07 05:41:44.307847] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.936 05:41:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.936 05:41:44 -- common/autotest_common.sh@852 -- # return 0 00:22:40.936 05:41:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:40.936 05:41:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:40.937 05:41:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:40.937 BaseBdev1_malloc 00:22:40.937 05:41:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:41.195 [2024-10-07 05:41:45.103318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:41.195 [2024-10-07 05:41:45.103414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.195 [2024-10-07 05:41:45.103456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:41.195 [2024-10-07 05:41:45.103504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.195 [2024-10-07 05:41:45.105763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.195 [2024-10-07 05:41:45.105812] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.195 BaseBdev1 00:22:41.195 05:41:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:41.195 05:41:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:41.195 05:41:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:41.454 BaseBdev2_malloc 00:22:41.454 05:41:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:41.712 [2024-10-07 05:41:45.582848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:41.712 [2024-10-07 05:41:45.582926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.712 [2024-10-07 05:41:45.582973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:41.712 [2024-10-07 05:41:45.583029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.712 [2024-10-07 05:41:45.585237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.712 [2024-10-07 05:41:45.585286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:41.712 BaseBdev2 00:22:41.712 05:41:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:41.712 05:41:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:41.712 05:41:45 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:41.971 BaseBdev3_malloc 00:22:41.971 05:41:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:42.229 [2024-10-07 05:41:45.967747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:42.229 [2024-10-07 05:41:45.967815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.229 [2024-10-07 05:41:45.967856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:42.229 [2024-10-07 05:41:45.967906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.229 [2024-10-07 05:41:45.970060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.229 [2024-10-07 05:41:45.970114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:42.229 BaseBdev3 00:22:42.229 05:41:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.229 05:41:45 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:42.229 05:41:45 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:42.229 BaseBdev4_malloc 00:22:42.229 05:41:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:42.487 [2024-10-07 05:41:46.369314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:42.487 [2024-10-07 05:41:46.369385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.487 [2024-10-07 05:41:46.369421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:42.487 [2024-10-07 05:41:46.369468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.487 [2024-10-07 05:41:46.371841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.487 [2024-10-07 05:41:46.371893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:42.487 BaseBdev4 00:22:42.487 05:41:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:42.745 spare_malloc 00:22:42.745 05:41:46 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:43.005 spare_delay 00:22:43.005 05:41:46 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:43.263 [2024-10-07 05:41:46.998855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:43.263 [2024-10-07 05:41:46.998946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.263 [2024-10-07 05:41:46.998988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:43.263 [2024-10-07 05:41:46.999044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.263 [2024-10-07 05:41:47.001461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:43.263 [2024-10-07 05:41:47.001523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:43.263 spare 00:22:43.263 05:41:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:43.263 [2024-10-07 05:41:47.170958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.264 [2024-10-07 05:41:47.172939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.264 [2024-10-07 05:41:47.173028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:43.264 [2024-10-07 05:41:47.173086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:43.264 [2024-10-07 05:41:47.173276] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:43.264 [2024-10-07 05:41:47.173292] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:43.264 [2024-10-07 05:41:47.173404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:43.264 [2024-10-07 05:41:47.173746] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:43.264 [2024-10-07 05:41:47.173760] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:43.264 [2024-10-07 05:41:47.173896] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.264 05:41:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.522 05:41:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.522 "name": "raid_bdev1", 00:22:43.522 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:43.522 "strip_size_kb": 0, 00:22:43.522 "state": "online", 00:22:43.522 "raid_level": "raid1", 00:22:43.522 "superblock": true, 00:22:43.522 "num_base_bdevs": 4, 00:22:43.522 "num_base_bdevs_discovered": 4, 00:22:43.522 "num_base_bdevs_operational": 4, 00:22:43.522 "base_bdevs_list": [ 00:22:43.522 { 00:22:43.522 "name": "BaseBdev1", 00:22:43.522 "uuid": "856a0586-ded9-50d9-8c36-4406657b59d1", 00:22:43.522 "is_configured": true, 00:22:43.522 "data_offset": 2048, 00:22:43.522 "data_size": 63488 00:22:43.522 }, 00:22:43.522 { 00:22:43.522 "name": "BaseBdev2", 00:22:43.522 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:43.522 "is_configured": true, 00:22:43.522 "data_offset": 2048, 
00:22:43.522 "data_size": 63488 00:22:43.522 }, 00:22:43.522 { 00:22:43.522 "name": "BaseBdev3", 00:22:43.522 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:43.522 "is_configured": true, 00:22:43.522 "data_offset": 2048, 00:22:43.522 "data_size": 63488 00:22:43.522 }, 00:22:43.522 { 00:22:43.522 "name": "BaseBdev4", 00:22:43.522 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:43.522 "is_configured": true, 00:22:43.522 "data_offset": 2048, 00:22:43.522 "data_size": 63488 00:22:43.522 } 00:22:43.522 ] 00:22:43.522 }' 00:22:43.522 05:41:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.522 05:41:47 -- common/autotest_common.sh@10 -- # set +x 00:22:44.088 05:41:47 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:44.089 05:41:47 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:44.347 [2024-10-07 05:41:48.235536] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.347 05:41:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:44.347 05:41:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.347 05:41:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:44.609 05:41:48 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:44.609 05:41:48 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:44.609 05:41:48 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:44.609 05:41:48 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:44.867 [2024-10-07 05:41:48.611110] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:44.867 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:44.867 Zero copy mechanism will not be used. 00:22:44.867 Running I/O for 60 seconds... 
00:22:44.867 [2024-10-07 05:41:48.678567] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.867 [2024-10-07 05:41:48.684686] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.867 05:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.125 05:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.125 "name": "raid_bdev1", 00:22:45.125 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:45.125 "strip_size_kb": 0, 00:22:45.125 "state": "online", 00:22:45.125 "raid_level": "raid1", 00:22:45.125 "superblock": true, 00:22:45.125 "num_base_bdevs": 4, 00:22:45.125 "num_base_bdevs_discovered": 3, 00:22:45.125 "num_base_bdevs_operational": 3, 00:22:45.125 "base_bdevs_list": [ 00:22:45.125 { 00:22:45.125 "name": null, 00:22:45.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.125 "is_configured": false, 00:22:45.125 "data_offset": 2048, 00:22:45.125 "data_size": 63488 00:22:45.125 }, 00:22:45.125 { 00:22:45.125 "name": "BaseBdev2", 00:22:45.125 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:45.125 "is_configured": true, 00:22:45.125 "data_offset": 2048, 00:22:45.125 "data_size": 63488 00:22:45.125 }, 00:22:45.125 { 00:22:45.125 "name": "BaseBdev3", 00:22:45.125 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:45.125 "is_configured": true, 00:22:45.125 "data_offset": 2048, 00:22:45.125 "data_size": 63488 00:22:45.125 }, 00:22:45.125 { 00:22:45.125 "name": "BaseBdev4", 00:22:45.125 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:45.125 "is_configured": true, 00:22:45.125 "data_offset": 2048, 00:22:45.125 "data_size": 63488 00:22:45.125 } 00:22:45.125 ] 00:22:45.125 }' 00:22:45.125 05:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.125 05:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:45.691 05:41:49 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:45.949 [2024-10-07 05:41:49.819177] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:45.949 [2024-10-07 05:41:49.819278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.949 05:41:49 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:45.949 [2024-10-07 05:41:49.864715] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:45.949 [2024-10-07 05:41:49.866785] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:46.207 
[2024-10-07 05:41:49.984742] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:46.207 [2024-10-07 05:41:49.986436] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:46.464 [2024-10-07 05:41:50.211101] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:46.464 [2024-10-07 05:41:50.211754] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:46.722 [2024-10-07 05:41:50.523983] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:46.722 [2024-10-07 05:41:50.525464] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:46.980 [2024-10-07 05:41:50.736400] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.980 05:41:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.238 [2024-10-07 05:41:51.060900] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:47.238 05:41:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.238 "name": "raid_bdev1", 00:22:47.238 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:47.238 "strip_size_kb": 0, 00:22:47.238 "state": "online", 00:22:47.238 "raid_level": "raid1", 00:22:47.238 "superblock": true, 00:22:47.238 "num_base_bdevs": 4, 00:22:47.238 "num_base_bdevs_discovered": 4, 00:22:47.238 "num_base_bdevs_operational": 4, 00:22:47.238 "process": { 00:22:47.238 "type": "rebuild", 00:22:47.238 "target": "spare", 00:22:47.238 "progress": { 00:22:47.238 "blocks": 14336, 00:22:47.238 "percent": 22 00:22:47.238 } 00:22:47.238 }, 00:22:47.238 "base_bdevs_list": [ 00:22:47.238 { 00:22:47.238 "name": "spare", 00:22:47.238 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:47.238 "is_configured": true, 00:22:47.238 "data_offset": 2048, 00:22:47.238 "data_size": 63488 00:22:47.238 }, 00:22:47.238 { 00:22:47.238 "name": "BaseBdev2", 00:22:47.238 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:47.238 "is_configured": true, 00:22:47.238 "data_offset": 2048, 00:22:47.238 "data_size": 63488 00:22:47.238 }, 00:22:47.238 { 00:22:47.238 "name": "BaseBdev3", 00:22:47.238 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:47.238 "is_configured": true, 00:22:47.238 "data_offset": 2048, 00:22:47.238 "data_size": 63488 00:22:47.238 }, 00:22:47.238 { 00:22:47.238 "name": "BaseBdev4", 00:22:47.238 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:47.238 "is_configured": true, 00:22:47.238 "data_offset": 2048, 00:22:47.238 "data_size": 63488 00:22:47.238 } 00:22:47.238 ] 00:22:47.238 }' 00:22:47.238 05:41:51 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.238 05:41:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.238 05:41:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.238 [2024-10-07 05:41:51.181372] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:47.496 05:41:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.496 05:41:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:47.496 [2024-10-07 05:41:51.396229] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.755 [2024-10-07 05:41:51.521598] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:47.755 [2024-10-07 05:41:51.538358] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.755 [2024-10-07 05:41:51.558785] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.755 05:41:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.013 05:41:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.013 "name": "raid_bdev1", 00:22:48.013 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:48.013 "strip_size_kb": 0, 00:22:48.013 "state": "online", 00:22:48.013 "raid_level": "raid1", 00:22:48.013 "superblock": true, 00:22:48.013 "num_base_bdevs": 4, 00:22:48.013 "num_base_bdevs_discovered": 3, 00:22:48.013 "num_base_bdevs_operational": 3, 00:22:48.013 "base_bdevs_list": [ 00:22:48.013 { 00:22:48.013 "name": null, 00:22:48.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.013 "is_configured": false, 00:22:48.013 "data_offset": 2048, 00:22:48.013 "data_size": 63488 00:22:48.013 }, 00:22:48.013 { 00:22:48.013 "name": "BaseBdev2", 00:22:48.013 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:48.013 "is_configured": true, 00:22:48.013 "data_offset": 2048, 00:22:48.013 "data_size": 63488 00:22:48.013 }, 00:22:48.013 { 00:22:48.013 "name": "BaseBdev3", 00:22:48.013 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:48.013 "is_configured": true, 00:22:48.013 "data_offset": 2048, 00:22:48.013 "data_size": 63488 00:22:48.013 }, 00:22:48.013 { 00:22:48.013 "name": "BaseBdev4", 00:22:48.013 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:48.013 "is_configured": true, 00:22:48.013 "data_offset": 2048, 00:22:48.013 "data_size": 63488 00:22:48.013 } 00:22:48.013 ] 
00:22:48.013 }' 00:22:48.013 05:41:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.013 05:41:51 -- common/autotest_common.sh@10 -- # set +x 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.579 05:41:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.837 05:41:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.837 "name": "raid_bdev1", 00:22:48.837 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:48.837 "strip_size_kb": 0, 00:22:48.837 "state": "online", 00:22:48.837 "raid_level": "raid1", 00:22:48.837 "superblock": true, 00:22:48.837 "num_base_bdevs": 4, 00:22:48.837 "num_base_bdevs_discovered": 3, 00:22:48.837 "num_base_bdevs_operational": 3, 00:22:48.837 "base_bdevs_list": [ 00:22:48.837 { 00:22:48.837 "name": null, 00:22:48.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.837 "is_configured": false, 00:22:48.837 "data_offset": 2048, 00:22:48.837 "data_size": 63488 00:22:48.837 }, 00:22:48.837 { 00:22:48.837 "name": "BaseBdev2", 00:22:48.837 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:48.837 "is_configured": true, 00:22:48.837 "data_offset": 2048, 00:22:48.837 "data_size": 63488 00:22:48.837 }, 00:22:48.837 { 00:22:48.837 "name": "BaseBdev3", 00:22:48.837 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:48.837 "is_configured": true, 00:22:48.837 "data_offset": 2048, 00:22:48.837 "data_size": 63488 00:22:48.837 }, 00:22:48.837 { 00:22:48.837 "name": "BaseBdev4", 00:22:48.837 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:48.837 "is_configured": true, 00:22:48.837 "data_offset": 2048, 00:22:48.837 "data_size": 63488 00:22:48.837 } 00:22:48.837 ] 00:22:48.837 }' 00:22:48.837 05:41:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.837 05:41:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:48.837 05:41:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.095 05:41:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:49.095 05:41:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:49.095 [2024-10-07 05:41:53.046400] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:49.095 [2024-10-07 05:41:53.046472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:49.353 05:41:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:49.353 [2024-10-07 05:41:53.099302] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:49.353 [2024-10-07 05:41:53.101387] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:49.353 [2024-10-07 05:41:53.229778] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:49.353 [2024-10-07 05:41:53.230275] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:22:49.611 [2024-10-07 05:41:53.439145] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:49.611 [2024-10-07 05:41:53.439416] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:49.869 [2024-10-07 05:41:53.781270] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:49.869 [2024-10-07 05:41:53.782972] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:50.127 [2024-10-07 05:41:54.016182] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.127 05:41:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.386 05:41:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:50.386 "name": "raid_bdev1", 00:22:50.386 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:50.386 "strip_size_kb": 0, 00:22:50.386 "state": "online", 00:22:50.386 "raid_level": "raid1", 00:22:50.386 "superblock": true, 00:22:50.386 "num_base_bdevs": 4, 00:22:50.386 "num_base_bdevs_discovered": 4, 00:22:50.386 "num_base_bdevs_operational": 4, 00:22:50.386 "process": { 00:22:50.386 "type": "rebuild", 00:22:50.386 "target": "spare", 00:22:50.386 "progress": { 00:22:50.386 "blocks": 12288, 00:22:50.386 "percent": 19 00:22:50.386 } 00:22:50.386 }, 00:22:50.386 "base_bdevs_list": [ 00:22:50.386 { 00:22:50.386 "name": "spare", 00:22:50.386 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:50.386 "is_configured": true, 00:22:50.386 "data_offset": 2048, 00:22:50.386 "data_size": 63488 00:22:50.386 }, 00:22:50.386 { 00:22:50.386 "name": "BaseBdev2", 00:22:50.386 "uuid": "623b226f-bee9-59ac-bd09-43aaa98f5b89", 00:22:50.386 "is_configured": true, 00:22:50.386 "data_offset": 2048, 00:22:50.386 "data_size": 63488 00:22:50.386 }, 00:22:50.386 { 00:22:50.386 "name": "BaseBdev3", 00:22:50.386 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:50.386 "is_configured": true, 00:22:50.386 "data_offset": 2048, 00:22:50.386 "data_size": 63488 00:22:50.386 }, 00:22:50.386 { 00:22:50.386 "name": "BaseBdev4", 00:22:50.386 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:50.386 "is_configured": true, 00:22:50.386 "data_offset": 2048, 00:22:50.386 "data_size": 63488 00:22:50.386 } 00:22:50.386 ] 00:22:50.386 }' 00:22:50.386 05:41:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:50.386 [2024-10-07 05:41:54.350646] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:50.643 05:41:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:50.643 05:41:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:50.643 05:41:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e 
]] 00:22:50.643 05:41:54 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:50.643 05:41:54 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:50.644 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:50.644 05:41:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:50.644 05:41:54 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:50.644 05:41:54 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:50.644 05:41:54 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:50.644 [2024-10-07 05:41:54.501058] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:50.901 [2024-10-07 05:41:54.632811] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:50.901 [2024-10-07 05:41:54.743063] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:50.901 [2024-10-07 05:41:54.755414] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:22:50.901 [2024-10-07 05:41:54.755443] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.159 05:41:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.417 05:41:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.417 "name": "raid_bdev1", 00:22:51.417 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:51.417 "strip_size_kb": 0, 00:22:51.417 "state": "online", 00:22:51.417 "raid_level": "raid1", 00:22:51.417 "superblock": true, 00:22:51.417 "num_base_bdevs": 4, 00:22:51.417 "num_base_bdevs_discovered": 3, 00:22:51.417 "num_base_bdevs_operational": 3, 00:22:51.417 "process": { 00:22:51.417 "type": "rebuild", 00:22:51.417 "target": "spare", 00:22:51.417 "progress": { 00:22:51.417 "blocks": 26624, 00:22:51.417 "percent": 41 00:22:51.417 } 00:22:51.417 }, 00:22:51.417 "base_bdevs_list": [ 00:22:51.417 { 00:22:51.417 "name": "spare", 00:22:51.417 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:51.417 "is_configured": true, 00:22:51.417 "data_offset": 2048, 00:22:51.417 "data_size": 63488 00:22:51.417 }, 00:22:51.417 { 00:22:51.417 "name": null, 00:22:51.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.417 "is_configured": false, 00:22:51.417 "data_offset": 2048, 00:22:51.417 "data_size": 63488 00:22:51.417 }, 00:22:51.417 { 00:22:51.417 "name": "BaseBdev3", 00:22:51.417 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:51.417 "is_configured": true, 00:22:51.417 "data_offset": 2048, 00:22:51.417 "data_size": 63488 00:22:51.417 }, 00:22:51.418 { 00:22:51.418 "name": "BaseBdev4", 00:22:51.418 
"uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:51.418 "is_configured": true, 00:22:51.418 "data_offset": 2048, 00:22:51.418 "data_size": 63488 00:22:51.418 } 00:22:51.418 ] 00:22:51.418 }' 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@657 -- # local timeout=555 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.418 05:41:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.675 [2024-10-07 05:41:55.433708] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.675 "name": "raid_bdev1", 00:22:51.675 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:51.675 "strip_size_kb": 0, 00:22:51.675 "state": "online", 00:22:51.675 "raid_level": "raid1", 00:22:51.675 "superblock": true, 00:22:51.675 "num_base_bdevs": 4, 00:22:51.675 "num_base_bdevs_discovered": 3, 00:22:51.675 "num_base_bdevs_operational": 3, 00:22:51.675 "process": { 00:22:51.675 "type": "rebuild", 00:22:51.675 "target": "spare", 00:22:51.675 "progress": { 00:22:51.675 "blocks": 32768, 00:22:51.675 "percent": 51 00:22:51.675 } 00:22:51.675 }, 00:22:51.675 "base_bdevs_list": [ 00:22:51.675 { 00:22:51.675 "name": "spare", 00:22:51.675 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:51.675 "is_configured": true, 00:22:51.675 "data_offset": 2048, 00:22:51.675 "data_size": 63488 00:22:51.675 }, 00:22:51.675 { 00:22:51.675 "name": null, 00:22:51.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.675 "is_configured": false, 00:22:51.675 "data_offset": 2048, 00:22:51.675 "data_size": 63488 00:22:51.675 }, 00:22:51.675 { 00:22:51.675 "name": "BaseBdev3", 00:22:51.675 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:51.675 "is_configured": true, 00:22:51.675 "data_offset": 2048, 00:22:51.675 "data_size": 63488 00:22:51.675 }, 00:22:51.675 { 00:22:51.675 "name": "BaseBdev4", 00:22:51.675 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:51.675 "is_configured": true, 00:22:51.675 "data_offset": 2048, 00:22:51.675 "data_size": 63488 00:22:51.675 } 00:22:51.675 ] 00:22:51.675 }' 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.675 05:41:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:51.934 [2024-10-07 05:41:55.670184] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:52.193 [2024-10-07 05:41:56.013111] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:52.193 [2024-10-07 05:41:56.129649] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:52.451 [2024-10-07 05:41:56.348629] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:52.709 [2024-10-07 05:41:56.573180] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.709 05:41:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.967 [2024-10-07 05:41:56.790979] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:52.967 "name": "raid_bdev1", 00:22:52.967 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:52.967 "strip_size_kb": 0, 00:22:52.967 "state": "online", 00:22:52.967 "raid_level": "raid1", 00:22:52.967 "superblock": true, 00:22:52.967 "num_base_bdevs": 4, 00:22:52.967 "num_base_bdevs_discovered": 3, 00:22:52.967 "num_base_bdevs_operational": 3, 00:22:52.967 "process": { 00:22:52.967 "type": "rebuild", 00:22:52.967 "target": "spare", 00:22:52.967 "progress": { 00:22:52.967 "blocks": 49152, 00:22:52.967 "percent": 77 00:22:52.967 } 00:22:52.967 }, 00:22:52.967 "base_bdevs_list": [ 00:22:52.967 { 00:22:52.967 "name": "spare", 00:22:52.967 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:52.967 "is_configured": true, 00:22:52.967 "data_offset": 2048, 00:22:52.967 "data_size": 63488 00:22:52.967 }, 00:22:52.967 { 00:22:52.967 "name": null, 00:22:52.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.967 "is_configured": false, 00:22:52.967 "data_offset": 2048, 00:22:52.967 "data_size": 63488 00:22:52.967 }, 00:22:52.967 { 00:22:52.967 "name": "BaseBdev3", 00:22:52.967 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:52.967 "is_configured": true, 00:22:52.967 "data_offset": 2048, 00:22:52.967 "data_size": 63488 00:22:52.967 }, 00:22:52.967 { 00:22:52.967 "name": "BaseBdev4", 00:22:52.967 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:52.967 "is_configured": true, 00:22:52.967 "data_offset": 2048, 00:22:52.967 "data_size": 63488 00:22:52.967 } 00:22:52.967 ] 00:22:52.967 }' 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@191 
-- # [[ spare == \s\p\a\r\e ]] 00:22:52.967 05:41:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:52.967 [2024-10-07 05:41:56.907544] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:53.535 [2024-10-07 05:41:57.348813] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:53.799 [2024-10-07 05:41:57.680796] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:54.091 [2024-10-07 05:41:57.780809] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:54.091 [2024-10-07 05:41:57.783712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.091 05:41:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.349 "name": "raid_bdev1", 00:22:54.349 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:54.349 "strip_size_kb": 0, 00:22:54.349 "state": "online", 00:22:54.349 "raid_level": "raid1", 00:22:54.349 "superblock": true, 00:22:54.349 "num_base_bdevs": 4, 00:22:54.349 "num_base_bdevs_discovered": 3, 00:22:54.349 "num_base_bdevs_operational": 3, 00:22:54.349 "base_bdevs_list": [ 00:22:54.349 { 00:22:54.349 "name": "spare", 00:22:54.349 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:54.349 "is_configured": true, 00:22:54.349 "data_offset": 2048, 00:22:54.349 "data_size": 63488 00:22:54.349 }, 00:22:54.349 { 00:22:54.349 "name": null, 00:22:54.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.349 "is_configured": false, 00:22:54.349 "data_offset": 2048, 00:22:54.349 "data_size": 63488 00:22:54.349 }, 00:22:54.349 { 00:22:54.349 "name": "BaseBdev3", 00:22:54.349 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:54.349 "is_configured": true, 00:22:54.349 "data_offset": 2048, 00:22:54.349 "data_size": 63488 00:22:54.349 }, 00:22:54.349 { 00:22:54.349 "name": "BaseBdev4", 00:22:54.349 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:54.349 "is_configured": true, 00:22:54.349 "data_offset": 2048, 00:22:54.349 "data_size": 63488 00:22:54.349 } 00:22:54.349 ] 00:22:54.349 }' 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@660 -- # break 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@184 -- # local 
process_type=none 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.349 05:41:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.607 "name": "raid_bdev1", 00:22:54.607 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:54.607 "strip_size_kb": 0, 00:22:54.607 "state": "online", 00:22:54.607 "raid_level": "raid1", 00:22:54.607 "superblock": true, 00:22:54.607 "num_base_bdevs": 4, 00:22:54.607 "num_base_bdevs_discovered": 3, 00:22:54.607 "num_base_bdevs_operational": 3, 00:22:54.607 "base_bdevs_list": [ 00:22:54.607 { 00:22:54.607 "name": "spare", 00:22:54.607 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:54.607 "is_configured": true, 00:22:54.607 "data_offset": 2048, 00:22:54.607 "data_size": 63488 00:22:54.607 }, 00:22:54.607 { 00:22:54.607 "name": null, 00:22:54.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.607 "is_configured": false, 00:22:54.607 "data_offset": 2048, 00:22:54.607 "data_size": 63488 00:22:54.607 }, 00:22:54.607 { 00:22:54.607 "name": "BaseBdev3", 00:22:54.607 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:54.607 "is_configured": true, 00:22:54.607 "data_offset": 2048, 00:22:54.607 "data_size": 63488 00:22:54.607 }, 00:22:54.607 { 00:22:54.607 "name": "BaseBdev4", 00:22:54.607 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:54.607 "is_configured": true, 00:22:54.607 "data_offset": 2048, 00:22:54.607 "data_size": 63488 00:22:54.607 } 00:22:54.607 ] 00:22:54.607 }' 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.607 05:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.864 05:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.864 "name": "raid_bdev1", 00:22:54.864 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:22:54.864 "strip_size_kb": 0, 00:22:54.864 "state": "online", 00:22:54.864 "raid_level": "raid1", 00:22:54.864 "superblock": true, 00:22:54.864 "num_base_bdevs": 4, 00:22:54.864 "num_base_bdevs_discovered": 3, 00:22:54.864 
"num_base_bdevs_operational": 3, 00:22:54.864 "base_bdevs_list": [ 00:22:54.864 { 00:22:54.864 "name": "spare", 00:22:54.864 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:22:54.864 "is_configured": true, 00:22:54.864 "data_offset": 2048, 00:22:54.864 "data_size": 63488 00:22:54.864 }, 00:22:54.864 { 00:22:54.864 "name": null, 00:22:54.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.864 "is_configured": false, 00:22:54.864 "data_offset": 2048, 00:22:54.864 "data_size": 63488 00:22:54.864 }, 00:22:54.864 { 00:22:54.864 "name": "BaseBdev3", 00:22:54.864 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:22:54.864 "is_configured": true, 00:22:54.864 "data_offset": 2048, 00:22:54.864 "data_size": 63488 00:22:54.864 }, 00:22:54.864 { 00:22:54.864 "name": "BaseBdev4", 00:22:54.864 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:22:54.864 "is_configured": true, 00:22:54.864 "data_offset": 2048, 00:22:54.864 "data_size": 63488 00:22:54.864 } 00:22:54.864 ] 00:22:54.864 }' 00:22:54.864 05:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.864 05:41:58 -- common/autotest_common.sh@10 -- # set +x 00:22:55.799 05:41:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:55.799 [2024-10-07 05:41:59.707681] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:55.799 [2024-10-07 05:41:59.707719] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.799 00:22:55.799 Latency(us) 00:22:55.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.799 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:55.799 raid_bdev1 : 11.14 102.13 306.40 0.00 0.00 13563.11 292.31 114390.11 00:22:55.799 =================================================================================================================== 00:22:55.799 Total : 102.13 306.40 0.00 0.00 13563.11 292.31 114390.11 00:22:55.799 [2024-10-07 05:41:59.770413] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.799 [2024-10-07 05:41:59.770458] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.799 [2024-10-07 05:41:59.770575] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.799 [2024-10-07 05:41:59.770591] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:22:55.799 0 00:22:56.057 05:41:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.057 05:41:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:56.314 05:42:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:56.314 05:42:00 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:56.314 05:42:00 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@12 -- # local i 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:22:56.314 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:56.314 05:42:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:56.573 /dev/nbd0 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:56.573 05:42:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:56.573 05:42:00 -- common/autotest_common.sh@857 -- # local i 00:22:56.573 05:42:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:56.573 05:42:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:56.573 05:42:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:56.573 05:42:00 -- common/autotest_common.sh@861 -- # break 00:22:56.573 05:42:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:56.573 05:42:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:56.573 05:42:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.573 1+0 records in 00:22:56.573 1+0 records out 00:22:56.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320828 s, 12.8 MB/s 00:22:56.573 05:42:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.573 05:42:00 -- common/autotest_common.sh@874 -- # size=4096 00:22:56.573 05:42:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.573 05:42:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:56.573 05:42:00 -- common/autotest_common.sh@877 -- # return 0 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@678 -- # continue 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:56.573 05:42:00 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@12 -- # local i 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:56.573 05:42:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:56.831 /dev/nbd1 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:56.831 05:42:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:56.831 05:42:00 -- common/autotest_common.sh@857 -- # local i 00:22:56.831 05:42:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:56.831 05:42:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:56.831 05:42:00 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:56.831 05:42:00 -- common/autotest_common.sh@861 -- # break 00:22:56.831 05:42:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:56.831 05:42:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:56.831 05:42:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:56.831 1+0 records in 00:22:56.831 1+0 records out 00:22:56.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562161 s, 7.3 MB/s 00:22:56.831 05:42:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.831 05:42:00 -- common/autotest_common.sh@874 -- # size=4096 00:22:56.831 05:42:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:56.831 05:42:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:56.831 05:42:00 -- common/autotest_common.sh@877 -- # return 0 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:56.831 05:42:00 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:56.831 05:42:00 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@51 -- # local i 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.831 05:42:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@41 -- # break 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@45 -- # return 0 00:22:57.088 05:42:01 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:57.088 05:42:01 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:57.088 05:42:01 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@12 -- # local i 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:57.088 05:42:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:57.346 /dev/nbd1 00:22:57.346 05:42:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:22:57.604 05:42:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:57.604 05:42:01 -- common/autotest_common.sh@857 -- # local i 00:22:57.604 05:42:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:57.604 05:42:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:57.604 05:42:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:57.604 05:42:01 -- common/autotest_common.sh@861 -- # break 00:22:57.604 05:42:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:57.604 05:42:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:57.604 05:42:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:57.604 1+0 records in 00:22:57.604 1+0 records out 00:22:57.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817766 s, 5.0 MB/s 00:22:57.604 05:42:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:57.604 05:42:01 -- common/autotest_common.sh@874 -- # size=4096 00:22:57.604 05:42:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:57.604 05:42:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:57.604 05:42:01 -- common/autotest_common.sh@877 -- # return 0 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:57.604 05:42:01 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:57.604 05:42:01 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@51 -- # local i 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:57.604 05:42:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@41 -- # break 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@45 -- # return 0 00:22:57.862 05:42:01 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@51 -- # local i 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:57.862 05:42:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@41 -- # break 00:22:58.120 05:42:01 -- bdev/nbd_common.sh@45 -- # return 0 00:22:58.120 05:42:01 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:58.120 05:42:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:58.120 05:42:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:58.120 05:42:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:58.379 05:42:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:58.637 [2024-10-07 05:42:02.416810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:58.637 [2024-10-07 05:42:02.416891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.637 [2024-10-07 05:42:02.416938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:58.637 [2024-10-07 05:42:02.416964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.637 [2024-10-07 05:42:02.419252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.637 [2024-10-07 05:42:02.419327] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:58.637 [2024-10-07 05:42:02.419447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:58.637 [2024-10-07 05:42:02.419510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:58.637 BaseBdev1 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@696 -- # continue 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:58.637 05:42:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:58.895 05:42:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:59.153 [2024-10-07 05:42:02.884949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:59.154 [2024-10-07 05:42:02.885019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.154 [2024-10-07 05:42:02.885061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:59.154 [2024-10-07 05:42:02.885086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.154 [2024-10-07 05:42:02.885454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.154 [2024-10-07 05:42:02.885522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:59.154 [2024-10-07 05:42:02.885607] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:59.154 [2024-10-07 05:42:02.885623] 
bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:59.154 [2024-10-07 05:42:02.885630] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.154 [2024-10-07 05:42:02.885647] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:59.154 [2024-10-07 05:42:02.885708] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:59.154 BaseBdev3 00:22:59.154 05:42:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:59.154 05:42:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:59.154 05:42:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:59.412 05:42:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:59.412 [2024-10-07 05:42:03.334722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:59.412 [2024-10-07 05:42:03.334822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.412 [2024-10-07 05:42:03.334860] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:59.412 [2024-10-07 05:42:03.334894] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.412 [2024-10-07 05:42:03.335281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.412 [2024-10-07 05:42:03.335354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:59.412 [2024-10-07 05:42:03.335444] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:59.412 [2024-10-07 05:42:03.335470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:59.412 BaseBdev4 00:22:59.412 05:42:03 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:59.670 05:42:03 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:59.928 [2024-10-07 05:42:03.710851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.928 [2024-10-07 05:42:03.710914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.928 [2024-10-07 05:42:03.710947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:59.928 [2024-10-07 05:42:03.710975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.928 [2024-10-07 05:42:03.711349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.928 [2024-10-07 05:42:03.711422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.928 [2024-10-07 05:42:03.711515] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:59.928 [2024-10-07 05:42:03.711565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.928 spare 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=raid_bdev1 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.928 05:42:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.928 [2024-10-07 05:42:03.811676] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:59.928 [2024-10-07 05:42:03.811700] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:59.928 [2024-10-07 05:42:03.811814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:22:59.928 [2024-10-07 05:42:03.812163] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:59.928 [2024-10-07 05:42:03.812185] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:59.929 [2024-10-07 05:42:03.812332] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.189 05:42:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.189 "name": "raid_bdev1", 00:23:00.189 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:23:00.189 "strip_size_kb": 0, 00:23:00.189 "state": "online", 00:23:00.189 "raid_level": "raid1", 00:23:00.189 "superblock": true, 00:23:00.189 "num_base_bdevs": 4, 00:23:00.189 "num_base_bdevs_discovered": 3, 00:23:00.189 "num_base_bdevs_operational": 3, 00:23:00.189 "base_bdevs_list": [ 00:23:00.189 { 00:23:00.189 "name": "spare", 00:23:00.189 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:23:00.189 "is_configured": true, 00:23:00.189 "data_offset": 2048, 00:23:00.189 "data_size": 63488 00:23:00.189 }, 00:23:00.189 { 00:23:00.189 "name": null, 00:23:00.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.189 "is_configured": false, 00:23:00.189 "data_offset": 2048, 00:23:00.189 "data_size": 63488 00:23:00.189 }, 00:23:00.189 { 00:23:00.189 "name": "BaseBdev3", 00:23:00.189 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:23:00.189 "is_configured": true, 00:23:00.189 "data_offset": 2048, 00:23:00.189 "data_size": 63488 00:23:00.189 }, 00:23:00.189 { 00:23:00.189 "name": "BaseBdev4", 00:23:00.189 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:23:00.189 "is_configured": true, 00:23:00.189 "data_offset": 2048, 00:23:00.189 "data_size": 63488 00:23:00.189 } 00:23:00.189 ] 00:23:00.189 }' 00:23:00.189 05:42:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.189 05:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@186 
-- # local raid_bdev_info 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.755 05:42:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.013 "name": "raid_bdev1", 00:23:01.013 "uuid": "d384417d-5deb-4827-addb-7861a1e959c6", 00:23:01.013 "strip_size_kb": 0, 00:23:01.013 "state": "online", 00:23:01.013 "raid_level": "raid1", 00:23:01.013 "superblock": true, 00:23:01.013 "num_base_bdevs": 4, 00:23:01.013 "num_base_bdevs_discovered": 3, 00:23:01.013 "num_base_bdevs_operational": 3, 00:23:01.013 "base_bdevs_list": [ 00:23:01.013 { 00:23:01.013 "name": "spare", 00:23:01.013 "uuid": "ff69f28a-1432-59df-adea-db4b1bb222bf", 00:23:01.013 "is_configured": true, 00:23:01.013 "data_offset": 2048, 00:23:01.013 "data_size": 63488 00:23:01.013 }, 00:23:01.013 { 00:23:01.013 "name": null, 00:23:01.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.013 "is_configured": false, 00:23:01.013 "data_offset": 2048, 00:23:01.013 "data_size": 63488 00:23:01.013 }, 00:23:01.013 { 00:23:01.013 "name": "BaseBdev3", 00:23:01.013 "uuid": "a630134b-c675-5508-8493-095c954c2eac", 00:23:01.013 "is_configured": true, 00:23:01.013 "data_offset": 2048, 00:23:01.013 "data_size": 63488 00:23:01.013 }, 00:23:01.013 { 00:23:01.013 "name": "BaseBdev4", 00:23:01.013 "uuid": "5366cf55-4c29-58d8-9038-2aa86554c861", 00:23:01.013 "is_configured": true, 00:23:01.013 "data_offset": 2048, 00:23:01.013 "data_size": 63488 00:23:01.013 } 00:23:01.013 ] 00:23:01.013 }' 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.013 05:42:04 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:01.271 05:42:05 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.271 05:42:05 -- bdev/bdev_raid.sh@709 -- # killprocess 170042 00:23:01.271 05:42:05 -- common/autotest_common.sh@926 -- # '[' -z 170042 ']' 00:23:01.271 05:42:05 -- common/autotest_common.sh@930 -- # kill -0 170042 00:23:01.271 05:42:05 -- common/autotest_common.sh@931 -- # uname 00:23:01.271 05:42:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:01.271 05:42:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 170042 00:23:01.271 killing process with pid 170042 00:23:01.271 Received shutdown signal, test time was about 16.579428 seconds 00:23:01.271 00:23:01.271 Latency(us) 00:23:01.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.271 =================================================================================================================== 00:23:01.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.271 05:42:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:01.271 05:42:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:01.271 05:42:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 170042' 00:23:01.271 05:42:05 -- common/autotest_common.sh@945 -- # kill 170042 00:23:01.271 [2024-10-07 
05:42:05.192734] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:01.271 05:42:05 -- common/autotest_common.sh@950 -- # wait 170042 00:23:01.271 [2024-10-07 05:42:05.192806] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.271 [2024-10-07 05:42:05.192877] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.271 [2024-10-07 05:42:05.192890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:23:01.530 [2024-10-07 05:42:05.465922] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:02.465 05:42:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:02.465 ************************************ 00:23:02.465 END TEST raid_rebuild_test_sb_io 00:23:02.465 ************************************ 00:23:02.465 00:23:02.465 real 0m22.747s 00:23:02.465 user 0m36.548s 00:23:02.465 sys 0m2.879s 00:23:02.465 05:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.465 05:42:06 -- common/autotest_common.sh@10 -- # set +x 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:02.723 05:42:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:02.723 05:42:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:02.723 05:42:06 -- common/autotest_common.sh@10 -- # set +x 00:23:02.723 ************************************ 00:23:02.723 START TEST raid5f_state_function_test 00:23:02.723 ************************************ 00:23:02.723 05:42:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:02.723 
05:42:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=170646 00:23:02.723 Process raid pid: 170646 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 170646' 00:23:02.723 05:42:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 170646 /var/tmp/spdk-raid.sock 00:23:02.723 05:42:06 -- common/autotest_common.sh@819 -- # '[' -z 170646 ']' 00:23:02.723 05:42:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:02.723 05:42:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:02.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:02.723 05:42:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:02.723 05:42:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:02.723 05:42:06 -- common/autotest_common.sh@10 -- # set +x 00:23:02.723 [2024-10-07 05:42:06.557055] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:02.724 [2024-10-07 05:42:06.557222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.983 [2024-10-07 05:42:06.704957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.983 [2024-10-07 05:42:06.862217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.242 [2024-10-07 05:42:07.029453] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:03.500 05:42:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:03.500 05:42:07 -- common/autotest_common.sh@852 -- # return 0 00:23:03.500 05:42:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:03.759 [2024-10-07 05:42:07.683533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:03.759 [2024-10-07 05:42:07.683609] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:03.759 [2024-10-07 05:42:07.683625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.759 [2024-10-07 05:42:07.683647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.759 [2024-10-07 05:42:07.683655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:03.759 [2024-10-07 05:42:07.683703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.759 05:42:07 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.759 05:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.017 05:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.017 "name": "Existed_Raid", 00:23:04.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.017 "strip_size_kb": 64, 00:23:04.017 "state": "configuring", 00:23:04.017 "raid_level": "raid5f", 00:23:04.017 "superblock": false, 00:23:04.017 "num_base_bdevs": 3, 00:23:04.017 "num_base_bdevs_discovered": 0, 00:23:04.017 "num_base_bdevs_operational": 3, 00:23:04.017 "base_bdevs_list": [ 00:23:04.017 { 00:23:04.017 "name": "BaseBdev1", 00:23:04.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.018 "is_configured": false, 00:23:04.018 "data_offset": 0, 00:23:04.018 "data_size": 0 00:23:04.018 }, 00:23:04.018 { 00:23:04.018 "name": "BaseBdev2", 00:23:04.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.018 "is_configured": false, 00:23:04.018 "data_offset": 0, 00:23:04.018 "data_size": 0 00:23:04.018 }, 00:23:04.018 { 00:23:04.018 "name": "BaseBdev3", 00:23:04.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.018 "is_configured": false, 00:23:04.018 "data_offset": 0, 00:23:04.018 "data_size": 0 00:23:04.018 } 00:23:04.018 ] 00:23:04.018 }' 00:23:04.018 05:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.018 05:42:07 -- common/autotest_common.sh@10 -- # set +x 00:23:04.584 05:42:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:04.841 [2024-10-07 05:42:08.679569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:04.841 [2024-10-07 05:42:08.679607] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:23:04.841 05:42:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:05.100 [2024-10-07 05:42:08.855622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:05.100 [2024-10-07 05:42:08.855679] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:05.100 [2024-10-07 05:42:08.855694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:05.100 [2024-10-07 05:42:08.855723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:05.100 [2024-10-07 05:42:08.855734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:05.100 [2024-10-07 05:42:08.855761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:05.100 05:42:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:23:05.358 [2024-10-07 05:42:09.133104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.358 BaseBdev1 00:23:05.358 05:42:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:05.358 05:42:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:05.358 05:42:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:05.359 05:42:09 -- common/autotest_common.sh@889 -- # local i 00:23:05.359 05:42:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:05.359 05:42:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:05.359 05:42:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.617 05:42:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:05.617 [ 00:23:05.617 { 00:23:05.617 "name": "BaseBdev1", 00:23:05.617 "aliases": [ 00:23:05.617 "ec21008d-6de5-47ff-95b4-1e29f5374e69" 00:23:05.617 ], 00:23:05.617 "product_name": "Malloc disk", 00:23:05.617 "block_size": 512, 00:23:05.617 "num_blocks": 65536, 00:23:05.617 "uuid": "ec21008d-6de5-47ff-95b4-1e29f5374e69", 00:23:05.617 "assigned_rate_limits": { 00:23:05.617 "rw_ios_per_sec": 0, 00:23:05.617 "rw_mbytes_per_sec": 0, 00:23:05.617 "r_mbytes_per_sec": 0, 00:23:05.617 "w_mbytes_per_sec": 0 00:23:05.617 }, 00:23:05.617 "claimed": true, 00:23:05.617 "claim_type": "exclusive_write", 00:23:05.617 "zoned": false, 00:23:05.617 "supported_io_types": { 00:23:05.617 "read": true, 00:23:05.617 "write": true, 00:23:05.617 "unmap": true, 00:23:05.617 "write_zeroes": true, 00:23:05.617 "flush": true, 00:23:05.617 "reset": true, 00:23:05.617 "compare": false, 00:23:05.617 "compare_and_write": false, 00:23:05.617 "abort": true, 00:23:05.617 "nvme_admin": false, 00:23:05.617 "nvme_io": false 00:23:05.617 }, 00:23:05.617 "memory_domains": [ 00:23:05.617 { 00:23:05.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.617 "dma_device_type": 2 00:23:05.617 } 00:23:05.617 ], 00:23:05.617 "driver_specific": {} 00:23:05.617 } 00:23:05.617 ] 00:23:05.617 05:42:09 -- common/autotest_common.sh@895 -- # return 0 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.617 05:42:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.875 05:42:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:05.875 "name": "Existed_Raid", 00:23:05.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.875 "strip_size_kb": 64, 00:23:05.875 "state": "configuring", 
00:23:05.875 "raid_level": "raid5f", 00:23:05.875 "superblock": false, 00:23:05.875 "num_base_bdevs": 3, 00:23:05.875 "num_base_bdevs_discovered": 1, 00:23:05.875 "num_base_bdevs_operational": 3, 00:23:05.875 "base_bdevs_list": [ 00:23:05.875 { 00:23:05.875 "name": "BaseBdev1", 00:23:05.875 "uuid": "ec21008d-6de5-47ff-95b4-1e29f5374e69", 00:23:05.875 "is_configured": true, 00:23:05.875 "data_offset": 0, 00:23:05.875 "data_size": 65536 00:23:05.875 }, 00:23:05.875 { 00:23:05.875 "name": "BaseBdev2", 00:23:05.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.875 "is_configured": false, 00:23:05.875 "data_offset": 0, 00:23:05.875 "data_size": 0 00:23:05.875 }, 00:23:05.875 { 00:23:05.875 "name": "BaseBdev3", 00:23:05.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.875 "is_configured": false, 00:23:05.875 "data_offset": 0, 00:23:05.875 "data_size": 0 00:23:05.875 } 00:23:05.875 ] 00:23:05.875 }' 00:23:05.875 05:42:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:05.875 05:42:09 -- common/autotest_common.sh@10 -- # set +x 00:23:06.441 05:42:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:06.699 [2024-10-07 05:42:10.525342] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:06.699 [2024-10-07 05:42:10.525386] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:06.699 05:42:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:06.699 05:42:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:06.958 [2024-10-07 05:42:10.793435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:06.958 [2024-10-07 05:42:10.794989] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:06.958 [2024-10-07 05:42:10.795050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:06.958 [2024-10-07 05:42:10.795064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:06.958 [2024-10-07 05:42:10.795093] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.958 05:42:10 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.216 05:42:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.216 "name": "Existed_Raid", 00:23:07.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.216 "strip_size_kb": 64, 00:23:07.216 "state": "configuring", 00:23:07.216 "raid_level": "raid5f", 00:23:07.216 "superblock": false, 00:23:07.216 "num_base_bdevs": 3, 00:23:07.216 "num_base_bdevs_discovered": 1, 00:23:07.216 "num_base_bdevs_operational": 3, 00:23:07.216 "base_bdevs_list": [ 00:23:07.216 { 00:23:07.216 "name": "BaseBdev1", 00:23:07.216 "uuid": "ec21008d-6de5-47ff-95b4-1e29f5374e69", 00:23:07.216 "is_configured": true, 00:23:07.216 "data_offset": 0, 00:23:07.216 "data_size": 65536 00:23:07.216 }, 00:23:07.216 { 00:23:07.216 "name": "BaseBdev2", 00:23:07.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.216 "is_configured": false, 00:23:07.216 "data_offset": 0, 00:23:07.216 "data_size": 0 00:23:07.216 }, 00:23:07.216 { 00:23:07.216 "name": "BaseBdev3", 00:23:07.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.216 "is_configured": false, 00:23:07.216 "data_offset": 0, 00:23:07.216 "data_size": 0 00:23:07.216 } 00:23:07.216 ] 00:23:07.216 }' 00:23:07.216 05:42:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.216 05:42:11 -- common/autotest_common.sh@10 -- # set +x 00:23:07.783 05:42:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:08.065 [2024-10-07 05:42:11.957067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.065 BaseBdev2 00:23:08.065 05:42:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:08.065 05:42:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:08.065 05:42:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:08.065 05:42:11 -- common/autotest_common.sh@889 -- # local i 00:23:08.065 05:42:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:08.065 05:42:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:08.065 05:42:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.335 05:42:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:08.593 [ 00:23:08.593 { 00:23:08.593 "name": "BaseBdev2", 00:23:08.593 "aliases": [ 00:23:08.593 "5302681a-e0d1-4723-b431-7ff29321046a" 00:23:08.593 ], 00:23:08.593 "product_name": "Malloc disk", 00:23:08.593 "block_size": 512, 00:23:08.593 "num_blocks": 65536, 00:23:08.593 "uuid": "5302681a-e0d1-4723-b431-7ff29321046a", 00:23:08.593 "assigned_rate_limits": { 00:23:08.593 "rw_ios_per_sec": 0, 00:23:08.593 "rw_mbytes_per_sec": 0, 00:23:08.593 "r_mbytes_per_sec": 0, 00:23:08.593 "w_mbytes_per_sec": 0 00:23:08.593 }, 00:23:08.593 "claimed": true, 00:23:08.593 "claim_type": "exclusive_write", 00:23:08.593 "zoned": false, 00:23:08.593 "supported_io_types": { 00:23:08.593 "read": true, 00:23:08.593 "write": true, 00:23:08.593 "unmap": true, 00:23:08.593 "write_zeroes": true, 00:23:08.593 "flush": true, 00:23:08.593 "reset": true, 00:23:08.593 "compare": false, 00:23:08.593 "compare_and_write": false, 00:23:08.593 "abort": true, 00:23:08.593 "nvme_admin": false, 00:23:08.593 "nvme_io": false 00:23:08.593 }, 00:23:08.593 "memory_domains": [ 00:23:08.593 { 
00:23:08.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:08.593 "dma_device_type": 2 00:23:08.593 } 00:23:08.593 ], 00:23:08.593 "driver_specific": {} 00:23:08.593 } 00:23:08.593 ] 00:23:08.593 05:42:12 -- common/autotest_common.sh@895 -- # return 0 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.593 05:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.851 05:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.851 "name": "Existed_Raid", 00:23:08.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.851 "strip_size_kb": 64, 00:23:08.851 "state": "configuring", 00:23:08.851 "raid_level": "raid5f", 00:23:08.851 "superblock": false, 00:23:08.851 "num_base_bdevs": 3, 00:23:08.851 "num_base_bdevs_discovered": 2, 00:23:08.851 "num_base_bdevs_operational": 3, 00:23:08.851 "base_bdevs_list": [ 00:23:08.851 { 00:23:08.851 "name": "BaseBdev1", 00:23:08.851 "uuid": "ec21008d-6de5-47ff-95b4-1e29f5374e69", 00:23:08.851 "is_configured": true, 00:23:08.851 "data_offset": 0, 00:23:08.851 "data_size": 65536 00:23:08.851 }, 00:23:08.851 { 00:23:08.851 "name": "BaseBdev2", 00:23:08.851 "uuid": "5302681a-e0d1-4723-b431-7ff29321046a", 00:23:08.851 "is_configured": true, 00:23:08.851 "data_offset": 0, 00:23:08.851 "data_size": 65536 00:23:08.851 }, 00:23:08.851 { 00:23:08.851 "name": "BaseBdev3", 00:23:08.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.851 "is_configured": false, 00:23:08.851 "data_offset": 0, 00:23:08.851 "data_size": 0 00:23:08.851 } 00:23:08.851 ] 00:23:08.851 }' 00:23:08.851 05:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.851 05:42:12 -- common/autotest_common.sh@10 -- # set +x 00:23:09.418 05:42:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:09.676 [2024-10-07 05:42:13.453045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:09.676 [2024-10-07 05:42:13.453126] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:23:09.676 [2024-10-07 05:42:13.453141] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:09.676 [2024-10-07 05:42:13.453256] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:09.676 [2024-10-07 05:42:13.457603] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 
00:23:09.676 [2024-10-07 05:42:13.457632] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:23:09.676 [2024-10-07 05:42:13.457886] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.676 BaseBdev3 00:23:09.676 05:42:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:09.676 05:42:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:09.676 05:42:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:09.676 05:42:13 -- common/autotest_common.sh@889 -- # local i 00:23:09.676 05:42:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:09.676 05:42:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:09.676 05:42:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:09.935 05:42:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:09.935 [ 00:23:09.935 { 00:23:09.935 "name": "BaseBdev3", 00:23:09.935 "aliases": [ 00:23:09.935 "c3d4ebca-48ef-4b49-b847-e2f0407e8445" 00:23:09.935 ], 00:23:09.935 "product_name": "Malloc disk", 00:23:09.935 "block_size": 512, 00:23:09.935 "num_blocks": 65536, 00:23:09.935 "uuid": "c3d4ebca-48ef-4b49-b847-e2f0407e8445", 00:23:09.935 "assigned_rate_limits": { 00:23:09.935 "rw_ios_per_sec": 0, 00:23:09.935 "rw_mbytes_per_sec": 0, 00:23:09.935 "r_mbytes_per_sec": 0, 00:23:09.935 "w_mbytes_per_sec": 0 00:23:09.935 }, 00:23:09.935 "claimed": true, 00:23:09.935 "claim_type": "exclusive_write", 00:23:09.935 "zoned": false, 00:23:09.935 "supported_io_types": { 00:23:09.935 "read": true, 00:23:09.935 "write": true, 00:23:09.935 "unmap": true, 00:23:09.935 "write_zeroes": true, 00:23:09.935 "flush": true, 00:23:09.935 "reset": true, 00:23:09.935 "compare": false, 00:23:09.935 "compare_and_write": false, 00:23:09.935 "abort": true, 00:23:09.935 "nvme_admin": false, 00:23:09.935 "nvme_io": false 00:23:09.935 }, 00:23:09.935 "memory_domains": [ 00:23:09.935 { 00:23:09.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.935 "dma_device_type": 2 00:23:09.935 } 00:23:09.935 ], 00:23:09.935 "driver_specific": {} 00:23:09.935 } 00:23:09.935 ] 00:23:09.935 05:42:13 -- common/autotest_common.sh@895 -- # return 0 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.935 05:42:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:10.193 05:42:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.193 "name": "Existed_Raid", 00:23:10.193 "uuid": "3879851e-9610-4aa0-bce0-47ecffc3fc8b", 00:23:10.193 "strip_size_kb": 64, 00:23:10.193 "state": "online", 00:23:10.193 "raid_level": "raid5f", 00:23:10.193 "superblock": false, 00:23:10.193 "num_base_bdevs": 3, 00:23:10.193 "num_base_bdevs_discovered": 3, 00:23:10.193 "num_base_bdevs_operational": 3, 00:23:10.193 "base_bdevs_list": [ 00:23:10.193 { 00:23:10.193 "name": "BaseBdev1", 00:23:10.193 "uuid": "ec21008d-6de5-47ff-95b4-1e29f5374e69", 00:23:10.193 "is_configured": true, 00:23:10.193 "data_offset": 0, 00:23:10.193 "data_size": 65536 00:23:10.193 }, 00:23:10.193 { 00:23:10.193 "name": "BaseBdev2", 00:23:10.193 "uuid": "5302681a-e0d1-4723-b431-7ff29321046a", 00:23:10.193 "is_configured": true, 00:23:10.193 "data_offset": 0, 00:23:10.193 "data_size": 65536 00:23:10.193 }, 00:23:10.193 { 00:23:10.193 "name": "BaseBdev3", 00:23:10.193 "uuid": "c3d4ebca-48ef-4b49-b847-e2f0407e8445", 00:23:10.193 "is_configured": true, 00:23:10.193 "data_offset": 0, 00:23:10.193 "data_size": 65536 00:23:10.193 } 00:23:10.193 ] 00:23:10.193 }' 00:23:10.193 05:42:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.193 05:42:14 -- common/autotest_common.sh@10 -- # set +x 00:23:10.759 05:42:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:11.017 [2024-10-07 05:42:14.846588] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.017 05:42:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.275 05:42:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.275 "name": "Existed_Raid", 00:23:11.275 "uuid": "3879851e-9610-4aa0-bce0-47ecffc3fc8b", 00:23:11.275 "strip_size_kb": 64, 00:23:11.275 "state": "online", 00:23:11.275 "raid_level": "raid5f", 00:23:11.275 "superblock": false, 00:23:11.275 "num_base_bdevs": 3, 00:23:11.275 "num_base_bdevs_discovered": 2, 00:23:11.275 "num_base_bdevs_operational": 2, 00:23:11.275 "base_bdevs_list": [ 00:23:11.275 { 00:23:11.275 "name": null, 00:23:11.275 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:11.275 "is_configured": false, 00:23:11.275 "data_offset": 0, 00:23:11.275 "data_size": 65536 00:23:11.275 }, 00:23:11.275 { 00:23:11.275 "name": "BaseBdev2", 00:23:11.275 "uuid": "5302681a-e0d1-4723-b431-7ff29321046a", 00:23:11.275 "is_configured": true, 00:23:11.275 "data_offset": 0, 00:23:11.275 "data_size": 65536 00:23:11.275 }, 00:23:11.275 { 00:23:11.275 "name": "BaseBdev3", 00:23:11.275 "uuid": "c3d4ebca-48ef-4b49-b847-e2f0407e8445", 00:23:11.275 "is_configured": true, 00:23:11.275 "data_offset": 0, 00:23:11.275 "data_size": 65536 00:23:11.275 } 00:23:11.275 ] 00:23:11.275 }' 00:23:11.275 05:42:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.275 05:42:15 -- common/autotest_common.sh@10 -- # set +x 00:23:11.840 05:42:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:11.840 05:42:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:11.840 05:42:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.840 05:42:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:12.098 05:42:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:12.098 05:42:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.098 05:42:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:12.356 [2024-10-07 05:42:16.273669] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:12.356 [2024-10-07 05:42:16.273704] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.356 [2024-10-07 05:42:16.273768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:12.615 05:42:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:12.873 [2024-10-07 05:42:16.768552] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:12.873 [2024-10-07 05:42:16.768624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:23:12.873 05:42:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:12.873 05:42:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:12.873 05:42:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.873 05:42:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:13.130 05:42:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:13.130 05:42:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:13.130 05:42:17 -- bdev/bdev_raid.sh@287 -- # killprocess 170646 00:23:13.130 05:42:17 -- common/autotest_common.sh@926 -- # '[' -z 170646 ']' 00:23:13.130 05:42:17 -- common/autotest_common.sh@930 -- # kill -0 170646 00:23:13.130 05:42:17 -- common/autotest_common.sh@931 -- # uname 00:23:13.130 05:42:17 -- common/autotest_common.sh@931 -- # '[' 
Linux = Linux ']' 00:23:13.130 05:42:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 170646 00:23:13.130 05:42:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:13.130 killing process with pid 170646 00:23:13.130 05:42:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:13.130 05:42:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 170646' 00:23:13.130 05:42:17 -- common/autotest_common.sh@945 -- # kill 170646 00:23:13.130 [2024-10-07 05:42:17.040756] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.130 05:42:17 -- common/autotest_common.sh@950 -- # wait 170646 00:23:13.130 [2024-10-07 05:42:17.040869] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.064 05:42:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:14.064 00:23:14.064 real 0m11.458s 00:23:14.064 user 0m20.191s 00:23:14.064 sys 0m1.446s 00:23:14.064 05:42:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:14.064 ************************************ 00:23:14.064 END TEST raid5f_state_function_test 00:23:14.064 05:42:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.064 ************************************ 00:23:14.064 05:42:17 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:14.064 05:42:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:14.064 05:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:14.064 05:42:17 -- common/autotest_common.sh@10 -- # set +x 00:23:14.064 ************************************ 00:23:14.064 START TEST raid5f_state_function_test_sb 00:23:14.064 ************************************ 00:23:14.064 05:42:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:14.064 05:42:18 -- 
bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=171023 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 171023' 00:23:14.064 Process raid pid: 171023 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:14.064 05:42:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 171023 /var/tmp/spdk-raid.sock 00:23:14.064 05:42:18 -- common/autotest_common.sh@819 -- # '[' -z 171023 ']' 00:23:14.064 05:42:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:14.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:14.064 05:42:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:14.064 05:42:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:14.064 05:42:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:14.064 05:42:18 -- common/autotest_common.sh@10 -- # set +x 00:23:14.322 [2024-10-07 05:42:18.086774] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:14.322 [2024-10-07 05:42:18.087520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.322 [2024-10-07 05:42:18.255139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.587 [2024-10-07 05:42:18.413992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.847 [2024-10-07 05:42:18.580990] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.104 05:42:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:15.104 05:42:19 -- common/autotest_common.sh@852 -- # return 0 00:23:15.104 05:42:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:15.362 [2024-10-07 05:42:19.271347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.362 [2024-10-07 05:42:19.271424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.362 [2024-10-07 05:42:19.271438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.362 [2024-10-07 05:42:19.271459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.362 [2024-10-07 05:42:19.271467] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.363 [2024-10-07 05:42:19.271509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.363 05:42:19 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.363 05:42:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.621 05:42:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.621 "name": "Existed_Raid", 00:23:15.621 "uuid": "d0899e04-986b-48b9-b583-c3fcedaa2413", 00:23:15.621 "strip_size_kb": 64, 00:23:15.621 "state": "configuring", 00:23:15.621 "raid_level": "raid5f", 00:23:15.621 "superblock": true, 00:23:15.621 "num_base_bdevs": 3, 00:23:15.621 "num_base_bdevs_discovered": 0, 00:23:15.621 "num_base_bdevs_operational": 3, 00:23:15.621 "base_bdevs_list": [ 00:23:15.621 { 00:23:15.621 "name": "BaseBdev1", 00:23:15.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.621 "is_configured": false, 00:23:15.621 "data_offset": 0, 00:23:15.621 "data_size": 0 00:23:15.621 }, 00:23:15.621 { 00:23:15.621 "name": "BaseBdev2", 00:23:15.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.621 "is_configured": false, 00:23:15.621 "data_offset": 0, 00:23:15.621 "data_size": 0 00:23:15.621 }, 00:23:15.621 { 00:23:15.621 "name": "BaseBdev3", 00:23:15.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.621 "is_configured": false, 00:23:15.621 "data_offset": 0, 00:23:15.621 "data_size": 0 00:23:15.621 } 00:23:15.621 ] 00:23:15.621 }' 00:23:15.621 05:42:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.621 05:42:19 -- common/autotest_common.sh@10 -- # set +x 00:23:16.187 05:42:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:16.444 [2024-10-07 05:42:20.343393] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:16.445 [2024-10-07 05:42:20.343429] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:23:16.445 05:42:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:16.702 [2024-10-07 05:42:20.583468] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:16.702 [2024-10-07 05:42:20.583528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:16.702 [2024-10-07 05:42:20.583541] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:16.702 [2024-10-07 05:42:20.583567] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:16.702 [2024-10-07 05:42:20.583577] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:16.702 [2024-10-07 05:42:20.583602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:16.702 05:42:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:23:16.959 [2024-10-07 05:42:20.805011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:16.959 BaseBdev1 00:23:16.959 05:42:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:16.959 05:42:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:16.959 05:42:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:16.959 05:42:20 -- common/autotest_common.sh@889 -- # local i 00:23:16.959 05:42:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:16.959 05:42:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:16.959 05:42:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:17.217 05:42:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.474 [ 00:23:17.474 { 00:23:17.474 "name": "BaseBdev1", 00:23:17.474 "aliases": [ 00:23:17.474 "e2f5285e-cc2c-4868-b93c-3e09975c313f" 00:23:17.474 ], 00:23:17.474 "product_name": "Malloc disk", 00:23:17.474 "block_size": 512, 00:23:17.474 "num_blocks": 65536, 00:23:17.474 "uuid": "e2f5285e-cc2c-4868-b93c-3e09975c313f", 00:23:17.474 "assigned_rate_limits": { 00:23:17.474 "rw_ios_per_sec": 0, 00:23:17.474 "rw_mbytes_per_sec": 0, 00:23:17.474 "r_mbytes_per_sec": 0, 00:23:17.474 "w_mbytes_per_sec": 0 00:23:17.474 }, 00:23:17.474 "claimed": true, 00:23:17.474 "claim_type": "exclusive_write", 00:23:17.474 "zoned": false, 00:23:17.474 "supported_io_types": { 00:23:17.474 "read": true, 00:23:17.474 "write": true, 00:23:17.474 "unmap": true, 00:23:17.474 "write_zeroes": true, 00:23:17.474 "flush": true, 00:23:17.474 "reset": true, 00:23:17.474 "compare": false, 00:23:17.474 "compare_and_write": false, 00:23:17.474 "abort": true, 00:23:17.474 "nvme_admin": false, 00:23:17.474 "nvme_io": false 00:23:17.474 }, 00:23:17.474 "memory_domains": [ 00:23:17.474 { 00:23:17.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.474 "dma_device_type": 2 00:23:17.474 } 00:23:17.474 ], 00:23:17.474 "driver_specific": {} 00:23:17.474 } 00:23:17.474 ] 00:23:17.474 05:42:21 -- common/autotest_common.sh@895 -- # return 0 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.474 05:42:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.474 "name": "Existed_Raid", 00:23:17.474 "uuid": "3116d116-6e2d-48f0-ab00-988db14beb98", 00:23:17.474 "strip_size_kb": 64, 00:23:17.474 "state": "configuring", 
00:23:17.474 "raid_level": "raid5f", 00:23:17.474 "superblock": true, 00:23:17.475 "num_base_bdevs": 3, 00:23:17.475 "num_base_bdevs_discovered": 1, 00:23:17.475 "num_base_bdevs_operational": 3, 00:23:17.475 "base_bdevs_list": [ 00:23:17.475 { 00:23:17.475 "name": "BaseBdev1", 00:23:17.475 "uuid": "e2f5285e-cc2c-4868-b93c-3e09975c313f", 00:23:17.475 "is_configured": true, 00:23:17.475 "data_offset": 2048, 00:23:17.475 "data_size": 63488 00:23:17.475 }, 00:23:17.475 { 00:23:17.475 "name": "BaseBdev2", 00:23:17.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.475 "is_configured": false, 00:23:17.475 "data_offset": 0, 00:23:17.475 "data_size": 0 00:23:17.475 }, 00:23:17.475 { 00:23:17.475 "name": "BaseBdev3", 00:23:17.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.475 "is_configured": false, 00:23:17.475 "data_offset": 0, 00:23:17.475 "data_size": 0 00:23:17.475 } 00:23:17.475 ] 00:23:17.475 }' 00:23:17.475 05:42:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.475 05:42:21 -- common/autotest_common.sh@10 -- # set +x 00:23:18.408 05:42:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:18.408 [2024-10-07 05:42:22.205251] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:18.408 [2024-10-07 05:42:22.205290] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:18.408 05:42:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:18.408 05:42:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:18.665 05:42:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:18.923 BaseBdev1 00:23:18.923 05:42:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:18.923 05:42:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:18.923 05:42:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:18.923 05:42:22 -- common/autotest_common.sh@889 -- # local i 00:23:18.923 05:42:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:18.923 05:42:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:18.923 05:42:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:18.923 05:42:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:19.180 [ 00:23:19.180 { 00:23:19.180 "name": "BaseBdev1", 00:23:19.180 "aliases": [ 00:23:19.180 "f1af1246-4e15-4ca2-a520-a795dc8e5373" 00:23:19.180 ], 00:23:19.180 "product_name": "Malloc disk", 00:23:19.180 "block_size": 512, 00:23:19.180 "num_blocks": 65536, 00:23:19.180 "uuid": "f1af1246-4e15-4ca2-a520-a795dc8e5373", 00:23:19.180 "assigned_rate_limits": { 00:23:19.180 "rw_ios_per_sec": 0, 00:23:19.180 "rw_mbytes_per_sec": 0, 00:23:19.180 "r_mbytes_per_sec": 0, 00:23:19.180 "w_mbytes_per_sec": 0 00:23:19.180 }, 00:23:19.180 "claimed": false, 00:23:19.180 "zoned": false, 00:23:19.180 "supported_io_types": { 00:23:19.180 "read": true, 00:23:19.180 "write": true, 00:23:19.180 "unmap": true, 00:23:19.180 "write_zeroes": true, 00:23:19.180 "flush": true, 00:23:19.180 "reset": true, 00:23:19.180 "compare": false, 00:23:19.180 "compare_and_write": false, 
00:23:19.180 "abort": true, 00:23:19.180 "nvme_admin": false, 00:23:19.180 "nvme_io": false 00:23:19.180 }, 00:23:19.180 "memory_domains": [ 00:23:19.180 { 00:23:19.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.180 "dma_device_type": 2 00:23:19.180 } 00:23:19.180 ], 00:23:19.180 "driver_specific": {} 00:23:19.180 } 00:23:19.180 ] 00:23:19.180 05:42:23 -- common/autotest_common.sh@895 -- # return 0 00:23:19.180 05:42:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:19.437 [2024-10-07 05:42:23.263351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:19.437 [2024-10-07 05:42:23.265073] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.437 [2024-10-07 05:42:23.265136] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.437 [2024-10-07 05:42:23.265150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:19.437 [2024-10-07 05:42:23.265179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.437 05:42:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.696 05:42:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.696 "name": "Existed_Raid", 00:23:19.696 "uuid": "4b98a23d-a276-4896-acce-1a7ff01815af", 00:23:19.696 "strip_size_kb": 64, 00:23:19.696 "state": "configuring", 00:23:19.696 "raid_level": "raid5f", 00:23:19.696 "superblock": true, 00:23:19.696 "num_base_bdevs": 3, 00:23:19.696 "num_base_bdevs_discovered": 1, 00:23:19.696 "num_base_bdevs_operational": 3, 00:23:19.696 "base_bdevs_list": [ 00:23:19.696 { 00:23:19.696 "name": "BaseBdev1", 00:23:19.696 "uuid": "f1af1246-4e15-4ca2-a520-a795dc8e5373", 00:23:19.696 "is_configured": true, 00:23:19.696 "data_offset": 2048, 00:23:19.696 "data_size": 63488 00:23:19.696 }, 00:23:19.696 { 00:23:19.696 "name": "BaseBdev2", 00:23:19.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.696 "is_configured": false, 00:23:19.696 "data_offset": 0, 00:23:19.696 "data_size": 0 00:23:19.696 }, 00:23:19.696 { 00:23:19.696 "name": "BaseBdev3", 00:23:19.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.696 "is_configured": false, 00:23:19.696 
"data_offset": 0, 00:23:19.696 "data_size": 0 00:23:19.696 } 00:23:19.696 ] 00:23:19.696 }' 00:23:19.696 05:42:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.696 05:42:23 -- common/autotest_common.sh@10 -- # set +x 00:23:20.260 05:42:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:20.517 [2024-10-07 05:42:24.305208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.517 BaseBdev2 00:23:20.517 05:42:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:20.517 05:42:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:20.517 05:42:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:20.517 05:42:24 -- common/autotest_common.sh@889 -- # local i 00:23:20.517 05:42:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:20.517 05:42:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:20.517 05:42:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.774 05:42:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:20.774 [ 00:23:20.774 { 00:23:20.774 "name": "BaseBdev2", 00:23:20.774 "aliases": [ 00:23:20.774 "7d6de43b-ba72-41fd-b7c8-5d22e8896abb" 00:23:20.774 ], 00:23:20.774 "product_name": "Malloc disk", 00:23:20.774 "block_size": 512, 00:23:20.774 "num_blocks": 65536, 00:23:20.774 "uuid": "7d6de43b-ba72-41fd-b7c8-5d22e8896abb", 00:23:20.774 "assigned_rate_limits": { 00:23:20.774 "rw_ios_per_sec": 0, 00:23:20.774 "rw_mbytes_per_sec": 0, 00:23:20.774 "r_mbytes_per_sec": 0, 00:23:20.774 "w_mbytes_per_sec": 0 00:23:20.774 }, 00:23:20.774 "claimed": true, 00:23:20.774 "claim_type": "exclusive_write", 00:23:20.774 "zoned": false, 00:23:20.774 "supported_io_types": { 00:23:20.774 "read": true, 00:23:20.774 "write": true, 00:23:20.774 "unmap": true, 00:23:20.774 "write_zeroes": true, 00:23:20.774 "flush": true, 00:23:20.774 "reset": true, 00:23:20.774 "compare": false, 00:23:20.774 "compare_and_write": false, 00:23:20.774 "abort": true, 00:23:20.774 "nvme_admin": false, 00:23:20.774 "nvme_io": false 00:23:20.774 }, 00:23:20.774 "memory_domains": [ 00:23:20.774 { 00:23:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.774 "dma_device_type": 2 00:23:20.774 } 00:23:20.774 ], 00:23:20.774 "driver_specific": {} 00:23:20.774 } 00:23:20.774 ] 00:23:20.774 05:42:24 -- common/autotest_common.sh@895 -- # return 0 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.032 05:42:24 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.032 05:42:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.032 05:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.032 "name": "Existed_Raid", 00:23:21.032 "uuid": "4b98a23d-a276-4896-acce-1a7ff01815af", 00:23:21.032 "strip_size_kb": 64, 00:23:21.032 "state": "configuring", 00:23:21.032 "raid_level": "raid5f", 00:23:21.032 "superblock": true, 00:23:21.032 "num_base_bdevs": 3, 00:23:21.032 "num_base_bdevs_discovered": 2, 00:23:21.032 "num_base_bdevs_operational": 3, 00:23:21.032 "base_bdevs_list": [ 00:23:21.032 { 00:23:21.032 "name": "BaseBdev1", 00:23:21.032 "uuid": "f1af1246-4e15-4ca2-a520-a795dc8e5373", 00:23:21.032 "is_configured": true, 00:23:21.032 "data_offset": 2048, 00:23:21.032 "data_size": 63488 00:23:21.032 }, 00:23:21.032 { 00:23:21.032 "name": "BaseBdev2", 00:23:21.032 "uuid": "7d6de43b-ba72-41fd-b7c8-5d22e8896abb", 00:23:21.032 "is_configured": true, 00:23:21.032 "data_offset": 2048, 00:23:21.032 "data_size": 63488 00:23:21.032 }, 00:23:21.032 { 00:23:21.032 "name": "BaseBdev3", 00:23:21.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.032 "is_configured": false, 00:23:21.032 "data_offset": 0, 00:23:21.032 "data_size": 0 00:23:21.032 } 00:23:21.032 ] 00:23:21.032 }' 00:23:21.032 05:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.032 05:42:25 -- common/autotest_common.sh@10 -- # set +x 00:23:21.598 05:42:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:21.868 [2024-10-07 05:42:25.813211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:21.868 [2024-10-07 05:42:25.813496] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:23:21.868 [2024-10-07 05:42:25.813513] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:21.868 [2024-10-07 05:42:25.813644] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:21.868 BaseBdev3 00:23:21.868 [2024-10-07 05:42:25.817960] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:23:21.868 [2024-10-07 05:42:25.817986] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:23:21.868 [2024-10-07 05:42:25.818164] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.868 05:42:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:21.868 05:42:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:21.868 05:42:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:21.868 05:42:25 -- common/autotest_common.sh@889 -- # local i 00:23:21.868 05:42:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:21.868 05:42:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:21.868 05:42:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:22.150 05:42:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:22.408 [ 00:23:22.408 { 00:23:22.408 "name": "BaseBdev3", 00:23:22.408 "aliases": [ 
00:23:22.408 "09db438b-007f-4c77-8fce-90a3272e64cf" 00:23:22.408 ], 00:23:22.408 "product_name": "Malloc disk", 00:23:22.408 "block_size": 512, 00:23:22.408 "num_blocks": 65536, 00:23:22.408 "uuid": "09db438b-007f-4c77-8fce-90a3272e64cf", 00:23:22.408 "assigned_rate_limits": { 00:23:22.408 "rw_ios_per_sec": 0, 00:23:22.408 "rw_mbytes_per_sec": 0, 00:23:22.408 "r_mbytes_per_sec": 0, 00:23:22.408 "w_mbytes_per_sec": 0 00:23:22.408 }, 00:23:22.408 "claimed": true, 00:23:22.408 "claim_type": "exclusive_write", 00:23:22.408 "zoned": false, 00:23:22.408 "supported_io_types": { 00:23:22.408 "read": true, 00:23:22.408 "write": true, 00:23:22.408 "unmap": true, 00:23:22.408 "write_zeroes": true, 00:23:22.408 "flush": true, 00:23:22.408 "reset": true, 00:23:22.408 "compare": false, 00:23:22.408 "compare_and_write": false, 00:23:22.408 "abort": true, 00:23:22.408 "nvme_admin": false, 00:23:22.408 "nvme_io": false 00:23:22.408 }, 00:23:22.408 "memory_domains": [ 00:23:22.408 { 00:23:22.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.408 "dma_device_type": 2 00:23:22.408 } 00:23:22.408 ], 00:23:22.408 "driver_specific": {} 00:23:22.408 } 00:23:22.408 ] 00:23:22.408 05:42:26 -- common/autotest_common.sh@895 -- # return 0 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.408 05:42:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.666 05:42:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.666 "name": "Existed_Raid", 00:23:22.666 "uuid": "4b98a23d-a276-4896-acce-1a7ff01815af", 00:23:22.666 "strip_size_kb": 64, 00:23:22.666 "state": "online", 00:23:22.666 "raid_level": "raid5f", 00:23:22.666 "superblock": true, 00:23:22.666 "num_base_bdevs": 3, 00:23:22.666 "num_base_bdevs_discovered": 3, 00:23:22.666 "num_base_bdevs_operational": 3, 00:23:22.666 "base_bdevs_list": [ 00:23:22.666 { 00:23:22.666 "name": "BaseBdev1", 00:23:22.666 "uuid": "f1af1246-4e15-4ca2-a520-a795dc8e5373", 00:23:22.666 "is_configured": true, 00:23:22.666 "data_offset": 2048, 00:23:22.666 "data_size": 63488 00:23:22.666 }, 00:23:22.666 { 00:23:22.666 "name": "BaseBdev2", 00:23:22.666 "uuid": "7d6de43b-ba72-41fd-b7c8-5d22e8896abb", 00:23:22.666 "is_configured": true, 00:23:22.666 "data_offset": 2048, 00:23:22.666 "data_size": 63488 00:23:22.666 }, 00:23:22.666 { 00:23:22.666 "name": "BaseBdev3", 00:23:22.666 "uuid": "09db438b-007f-4c77-8fce-90a3272e64cf", 00:23:22.666 "is_configured": true, 00:23:22.666 "data_offset": 2048, 00:23:22.666 "data_size": 63488 
00:23:22.666 } 00:23:22.666 ] 00:23:22.666 }' 00:23:22.666 05:42:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.666 05:42:26 -- common/autotest_common.sh@10 -- # set +x 00:23:23.232 05:42:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:23.490 [2024-10-07 05:42:27.318872] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.490 05:42:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.749 05:42:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.749 "name": "Existed_Raid", 00:23:23.749 "uuid": "4b98a23d-a276-4896-acce-1a7ff01815af", 00:23:23.749 "strip_size_kb": 64, 00:23:23.749 "state": "online", 00:23:23.749 "raid_level": "raid5f", 00:23:23.749 "superblock": true, 00:23:23.749 "num_base_bdevs": 3, 00:23:23.749 "num_base_bdevs_discovered": 2, 00:23:23.749 "num_base_bdevs_operational": 2, 00:23:23.749 "base_bdevs_list": [ 00:23:23.749 { 00:23:23.749 "name": null, 00:23:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.749 "is_configured": false, 00:23:23.749 "data_offset": 2048, 00:23:23.749 "data_size": 63488 00:23:23.749 }, 00:23:23.749 { 00:23:23.749 "name": "BaseBdev2", 00:23:23.749 "uuid": "7d6de43b-ba72-41fd-b7c8-5d22e8896abb", 00:23:23.749 "is_configured": true, 00:23:23.749 "data_offset": 2048, 00:23:23.749 "data_size": 63488 00:23:23.749 }, 00:23:23.749 { 00:23:23.749 "name": "BaseBdev3", 00:23:23.749 "uuid": "09db438b-007f-4c77-8fce-90a3272e64cf", 00:23:23.749 "is_configured": true, 00:23:23.749 "data_offset": 2048, 00:23:23.749 "data_size": 63488 00:23:23.749 } 00:23:23.749 ] 00:23:23.749 }' 00:23:23.749 05:42:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.749 05:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:24.315 05:42:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:24.315 05:42:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.315 05:42:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.315 05:42:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:24.572 05:42:28 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:23:24.572 05:42:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:24.572 05:42:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:24.830 [2024-10-07 05:42:28.605761] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:24.830 [2024-10-07 05:42:28.605796] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:24.830 [2024-10-07 05:42:28.605857] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.830 05:42:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:24.830 05:42:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:24.830 05:42:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.830 05:42:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:25.089 05:42:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:25.089 05:42:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:25.089 05:42:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:25.089 [2024-10-07 05:42:29.040593] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:25.089 [2024-10-07 05:42:29.040657] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:25.347 05:42:29 -- bdev/bdev_raid.sh@287 -- # killprocess 171023 00:23:25.347 05:42:29 -- common/autotest_common.sh@926 -- # '[' -z 171023 ']' 00:23:25.347 05:42:29 -- common/autotest_common.sh@930 -- # kill -0 171023 00:23:25.347 05:42:29 -- common/autotest_common.sh@931 -- # uname 00:23:25.605 05:42:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:25.605 05:42:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 171023 00:23:25.605 killing process with pid 171023 00:23:25.605 05:42:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:25.605 05:42:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:25.605 05:42:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 171023' 00:23:25.606 05:42:29 -- common/autotest_common.sh@945 -- # kill 171023 00:23:25.606 05:42:29 -- common/autotest_common.sh@950 -- # wait 171023 00:23:25.606 [2024-10-07 05:42:29.339496] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:25.606 [2024-10-07 05:42:29.339590] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:26.542 ************************************ 00:23:26.542 END TEST raid5f_state_function_test_sb 00:23:26.542 ************************************ 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:26.542 00:23:26.542 real 0m12.236s 00:23:26.542 user 0m21.683s 00:23:26.542 sys 0m1.420s 00:23:26.542 05:42:30 -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:23:26.542 05:42:30 -- common/autotest_common.sh@10 -- # set +x 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:26.542 05:42:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:26.542 05:42:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:26.542 05:42:30 -- common/autotest_common.sh@10 -- # set +x 00:23:26.542 ************************************ 00:23:26.542 START TEST raid5f_superblock_test 00:23:26.542 ************************************ 00:23:26.542 05:42:30 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@357 -- # raid_pid=171403 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:26.542 05:42:30 -- bdev/bdev_raid.sh@358 -- # waitforlisten 171403 /var/tmp/spdk-raid.sock 00:23:26.542 05:42:30 -- common/autotest_common.sh@819 -- # '[' -z 171403 ']' 00:23:26.542 05:42:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:26.542 05:42:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:26.542 05:42:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:26.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:26.542 05:42:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:26.542 05:42:30 -- common/autotest_common.sh@10 -- # set +x 00:23:26.542 [2024-10-07 05:42:30.389136] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:23:26.542 [2024-10-07 05:42:30.389346] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171403 ] 00:23:26.800 [2024-10-07 05:42:30.558294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.800 [2024-10-07 05:42:30.714619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.057 [2024-10-07 05:42:30.878345] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:27.623 05:42:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:27.623 05:42:31 -- common/autotest_common.sh@852 -- # return 0 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:27.623 malloc1 00:23:27.623 05:42:31 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:27.881 [2024-10-07 05:42:31.792231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:27.881 [2024-10-07 05:42:31.792323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.881 [2024-10-07 05:42:31.792357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:27.881 [2024-10-07 05:42:31.792404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.881 [2024-10-07 05:42:31.794401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.881 [2024-10-07 05:42:31.794461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:27.881 pt1 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:27.881 05:42:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:28.139 malloc2 00:23:28.139 05:42:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:28.397 [2024-10-07 05:42:32.319645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.397 [2024-10-07 05:42:32.319710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.397 [2024-10-07 05:42:32.319754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:28.397 [2024-10-07 05:42:32.319806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.397 [2024-10-07 05:42:32.321804] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.397 [2024-10-07 05:42:32.321857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.397 pt2 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:28.397 05:42:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:28.655 malloc3 00:23:28.655 05:42:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:28.912 [2024-10-07 05:42:32.728424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:28.912 [2024-10-07 05:42:32.728501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.912 [2024-10-07 05:42:32.728548] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:28.912 [2024-10-07 05:42:32.728594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.912 [2024-10-07 05:42:32.730652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.912 [2024-10-07 05:42:32.730711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:28.912 pt3 00:23:28.912 05:42:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:28.912 05:42:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:28.912 05:42:32 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:29.170 [2024-10-07 05:42:32.920470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:29.170 [2024-10-07 05:42:32.922306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.170 [2024-10-07 05:42:32.922379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:29.170 [2024-10-07 05:42:32.922618] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:23:29.170 [2024-10-07 05:42:32.922643] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:29.170 [2024-10-07 05:42:32.922759] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:29.170 [2024-10-07 05:42:32.927007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:23:29.170 [2024-10-07 05:42:32.927033] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:23:29.170 [2024-10-07 05:42:32.927206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.170 05:42:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.428 05:42:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.428 "name": "raid_bdev1", 00:23:29.428 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:29.428 "strip_size_kb": 64, 00:23:29.428 "state": "online", 00:23:29.428 "raid_level": "raid5f", 00:23:29.428 "superblock": true, 00:23:29.428 "num_base_bdevs": 3, 00:23:29.428 "num_base_bdevs_discovered": 3, 00:23:29.428 "num_base_bdevs_operational": 3, 00:23:29.428 "base_bdevs_list": [ 00:23:29.428 { 00:23:29.428 "name": "pt1", 00:23:29.428 "uuid": "629bda2a-e4b9-5b64-b918-265dadb7570f", 00:23:29.428 "is_configured": true, 00:23:29.428 "data_offset": 2048, 00:23:29.428 "data_size": 63488 00:23:29.428 }, 00:23:29.428 { 00:23:29.428 "name": "pt2", 00:23:29.428 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:29.428 "is_configured": true, 00:23:29.428 "data_offset": 2048, 00:23:29.428 "data_size": 63488 00:23:29.428 }, 00:23:29.428 { 00:23:29.428 "name": "pt3", 00:23:29.428 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:29.428 "is_configured": true, 00:23:29.428 "data_offset": 2048, 00:23:29.428 "data_size": 63488 00:23:29.428 } 00:23:29.428 ] 00:23:29.428 }' 00:23:29.428 05:42:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.428 05:42:33 -- common/autotest_common.sh@10 -- # set +x 00:23:29.995 05:42:33 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.995 05:42:33 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:30.253 [2024-10-07 05:42:33.980018] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.253 05:42:33 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 00:23:30.253 05:42:33 -- bdev/bdev_raid.sh@380 -- # '[' -z 1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 ']' 00:23:30.253 05:42:33 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.253 [2024-10-07 05:42:34.223943] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.253 [2024-10-07 05:42:34.223968] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.253 [2024-10-07 05:42:34.224037] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.253 [2024-10-07 05:42:34.224111] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.253 [2024-10-07 05:42:34.224130] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:30.511 05:42:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:30.769 05:42:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:30.769 05:42:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:31.027 05:42:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:31.027 05:42:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:31.285 05:42:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:31.285 05:42:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:31.285 05:42:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:31.285 05:42:35 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:31.285 05:42:35 -- common/autotest_common.sh@640 -- # local es=0 00:23:31.285 05:42:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:31.285 05:42:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.285 05:42:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.285 05:42:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.285 05:42:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.285 05:42:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.285 05:42:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:31.285 05:42:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.285 05:42:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:31.285 05:42:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:31.543 [2024-10-07 05:42:35.392131] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:31.543 [2024-10-07 05:42:35.393965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:31.543 [2024-10-07 05:42:35.394022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:31.543 [2024-10-07 05:42:35.394073] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:31.543 [2024-10-07 05:42:35.394145] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:31.543 [2024-10-07 05:42:35.394187] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:31.543 [2024-10-07 05:42:35.394239] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.543 [2024-10-07 05:42:35.394251] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:23:31.543 request: 00:23:31.543 { 00:23:31.543 "name": "raid_bdev1", 00:23:31.543 "raid_level": "raid5f", 00:23:31.543 "base_bdevs": [ 00:23:31.543 "malloc1", 00:23:31.543 "malloc2", 00:23:31.543 "malloc3" 00:23:31.543 ], 00:23:31.543 "superblock": false, 00:23:31.543 "strip_size_kb": 64, 00:23:31.543 "method": "bdev_raid_create", 00:23:31.543 "req_id": 1 00:23:31.543 } 00:23:31.543 Got JSON-RPC error response 00:23:31.543 response: 00:23:31.543 { 00:23:31.543 "code": -17, 00:23:31.543 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:31.543 } 00:23:31.543 05:42:35 -- common/autotest_common.sh@643 -- # es=1 00:23:31.543 05:42:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:31.543 05:42:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:31.543 05:42:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:31.543 05:42:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.543 05:42:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:31.801 05:42:35 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:31.801 05:42:35 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:31.801 05:42:35 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.060 [2024-10-07 05:42:35.836161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.060 [2024-10-07 05:42:35.836231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.060 [2024-10-07 05:42:35.836271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:32.060 [2024-10-07 05:42:35.836295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.060 [2024-10-07 05:42:35.838322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.060 [2024-10-07 05:42:35.838374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.060 [2024-10-07 05:42:35.838481] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:32.060 [2024-10-07 05:42:35.838557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:32.060 pt1 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.060 05:42:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.060 05:42:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.060 "name": "raid_bdev1", 00:23:32.060 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:32.060 "strip_size_kb": 64, 00:23:32.060 "state": "configuring", 00:23:32.060 "raid_level": "raid5f", 00:23:32.060 "superblock": true, 00:23:32.060 "num_base_bdevs": 3, 00:23:32.060 "num_base_bdevs_discovered": 1, 00:23:32.060 "num_base_bdevs_operational": 3, 00:23:32.060 "base_bdevs_list": [ 00:23:32.060 { 00:23:32.060 "name": "pt1", 00:23:32.060 "uuid": "629bda2a-e4b9-5b64-b918-265dadb7570f", 00:23:32.060 "is_configured": true, 00:23:32.060 "data_offset": 2048, 00:23:32.060 "data_size": 63488 00:23:32.060 }, 00:23:32.060 { 00:23:32.060 "name": null, 00:23:32.060 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:32.060 "is_configured": false, 00:23:32.060 "data_offset": 2048, 00:23:32.060 "data_size": 63488 00:23:32.060 }, 00:23:32.060 { 00:23:32.060 "name": null, 00:23:32.060 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:32.060 "is_configured": false, 00:23:32.060 "data_offset": 2048, 00:23:32.060 "data_size": 63488 00:23:32.060 } 00:23:32.060 ] 00:23:32.060 }' 00:23:32.060 05:42:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.060 05:42:36 -- common/autotest_common.sh@10 -- # set +x 00:23:32.626 05:42:36 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:32.626 05:42:36 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:32.884 [2024-10-07 05:42:36.732334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:32.884 [2024-10-07 05:42:36.732441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.884 [2024-10-07 05:42:36.732495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:32.884 [2024-10-07 05:42:36.732523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.884 [2024-10-07 05:42:36.732980] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.884 [2024-10-07 05:42:36.733023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:32.884 [2024-10-07 05:42:36.733125] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:32.884 [2024-10-07 05:42:36.733152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:32.884 pt2 00:23:32.884 05:42:36 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:33.142 [2024-10-07 05:42:36.980381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.142 05:42:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.400 05:42:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.400 "name": "raid_bdev1", 00:23:33.400 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:33.400 "strip_size_kb": 64, 00:23:33.400 "state": "configuring", 00:23:33.400 "raid_level": "raid5f", 00:23:33.400 "superblock": true, 00:23:33.400 "num_base_bdevs": 3, 00:23:33.400 "num_base_bdevs_discovered": 1, 00:23:33.400 "num_base_bdevs_operational": 3, 00:23:33.400 "base_bdevs_list": [ 00:23:33.400 { 00:23:33.400 "name": "pt1", 00:23:33.400 "uuid": "629bda2a-e4b9-5b64-b918-265dadb7570f", 00:23:33.400 "is_configured": true, 00:23:33.400 "data_offset": 2048, 00:23:33.400 "data_size": 63488 00:23:33.400 }, 00:23:33.400 { 00:23:33.400 "name": null, 00:23:33.400 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:33.400 "is_configured": false, 00:23:33.400 "data_offset": 2048, 00:23:33.400 "data_size": 63488 00:23:33.400 }, 00:23:33.400 { 00:23:33.400 "name": null, 00:23:33.400 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:33.400 "is_configured": false, 00:23:33.400 "data_offset": 2048, 00:23:33.400 "data_size": 63488 00:23:33.400 } 00:23:33.400 ] 00:23:33.400 }' 00:23:33.400 05:42:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.400 05:42:37 -- common/autotest_common.sh@10 -- # set +x 00:23:33.967 05:42:37 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:33.967 05:42:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:33.967 05:42:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:34.228 [2024-10-07 05:42:38.044522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:34.228 [2024-10-07 05:42:38.044613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.228 [2024-10-07 05:42:38.044650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:34.228 [2024-10-07 05:42:38.044679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.228 [2024-10-07 05:42:38.045110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.228 [2024-10-07 05:42:38.045218] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:34.228 [2024-10-07 05:42:38.045336] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:34.228 [2024-10-07 05:42:38.045390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.228 pt2 00:23:34.228 05:42:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:34.228 05:42:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.228 05:42:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:34.486 [2024-10-07 05:42:38.284576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:34.486 [2024-10-07 05:42:38.284656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.486 [2024-10-07 05:42:38.284693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:34.486 [2024-10-07 05:42:38.284724] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.486 [2024-10-07 05:42:38.285139] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.486 [2024-10-07 05:42:38.285194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:34.486 [2024-10-07 05:42:38.285343] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:34.487 [2024-10-07 05:42:38.285369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:34.487 [2024-10-07 05:42:38.285500] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:23:34.487 [2024-10-07 05:42:38.285526] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:34.487 [2024-10-07 05:42:38.285637] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:34.487 [2024-10-07 05:42:38.289688] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:23:34.487 [2024-10-07 05:42:38.289714] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:23:34.487 [2024-10-07 05:42:38.289919] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.487 pt3 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.487 05:42:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.487 
05:42:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.745 05:42:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.745 "name": "raid_bdev1", 00:23:34.745 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:34.745 "strip_size_kb": 64, 00:23:34.745 "state": "online", 00:23:34.745 "raid_level": "raid5f", 00:23:34.745 "superblock": true, 00:23:34.745 "num_base_bdevs": 3, 00:23:34.745 "num_base_bdevs_discovered": 3, 00:23:34.745 "num_base_bdevs_operational": 3, 00:23:34.745 "base_bdevs_list": [ 00:23:34.745 { 00:23:34.745 "name": "pt1", 00:23:34.745 "uuid": "629bda2a-e4b9-5b64-b918-265dadb7570f", 00:23:34.745 "is_configured": true, 00:23:34.745 "data_offset": 2048, 00:23:34.745 "data_size": 63488 00:23:34.745 }, 00:23:34.745 { 00:23:34.745 "name": "pt2", 00:23:34.745 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:34.745 "is_configured": true, 00:23:34.745 "data_offset": 2048, 00:23:34.745 "data_size": 63488 00:23:34.745 }, 00:23:34.745 { 00:23:34.745 "name": "pt3", 00:23:34.745 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:34.745 "is_configured": true, 00:23:34.745 "data_offset": 2048, 00:23:34.745 "data_size": 63488 00:23:34.745 } 00:23:34.745 ] 00:23:34.745 }' 00:23:34.745 05:42:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.745 05:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:35.312 05:42:39 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:35.312 05:42:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:35.570 [2024-10-07 05:42:39.290434] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.570 05:42:39 -- bdev/bdev_raid.sh@430 -- # '[' 1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 '!=' 1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 ']' 00:23:35.570 05:42:39 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:35.570 05:42:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:35.570 05:42:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:35.570 05:42:39 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:35.829 [2024-10-07 05:42:39.554367] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.829 05:42:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.116 05:42:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.116 "name": "raid_bdev1", 00:23:36.116 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:36.116 "strip_size_kb": 64, 
00:23:36.116 "state": "online", 00:23:36.116 "raid_level": "raid5f", 00:23:36.116 "superblock": true, 00:23:36.116 "num_base_bdevs": 3, 00:23:36.116 "num_base_bdevs_discovered": 2, 00:23:36.116 "num_base_bdevs_operational": 2, 00:23:36.116 "base_bdevs_list": [ 00:23:36.116 { 00:23:36.116 "name": null, 00:23:36.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.116 "is_configured": false, 00:23:36.116 "data_offset": 2048, 00:23:36.116 "data_size": 63488 00:23:36.116 }, 00:23:36.116 { 00:23:36.116 "name": "pt2", 00:23:36.116 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:36.116 "is_configured": true, 00:23:36.116 "data_offset": 2048, 00:23:36.116 "data_size": 63488 00:23:36.116 }, 00:23:36.116 { 00:23:36.116 "name": "pt3", 00:23:36.116 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:36.116 "is_configured": true, 00:23:36.116 "data_offset": 2048, 00:23:36.116 "data_size": 63488 00:23:36.116 } 00:23:36.116 ] 00:23:36.116 }' 00:23:36.116 05:42:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.116 05:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:36.691 05:42:40 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:36.949 [2024-10-07 05:42:40.678548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.949 [2024-10-07 05:42:40.678712] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.949 [2024-10-07 05:42:40.678880] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.949 [2024-10-07 05:42:40.678998] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.949 [2024-10-07 05:42:40.679195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:36.949 05:42:40 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:37.207 05:42:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:37.207 05:42:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.207 05:42:41 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:37.464 05:42:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:37.464 05:42:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:37.464 05:42:41 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:37.464 05:42:41 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:37.464 05:42:41 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:37.722 [2024-10-07 05:42:41.474699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:37.722 [2024-10-07 05:42:41.474959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:37.722 [2024-10-07 05:42:41.475123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:37.722 [2024-10-07 05:42:41.475279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:37.722 [2024-10-07 05:42:41.477555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:37.722 [2024-10-07 05:42:41.477737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:37.722 [2024-10-07 05:42:41.477997] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:37.722 [2024-10-07 05:42:41.478171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:37.722 pt2 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.722 05:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.980 05:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.980 "name": "raid_bdev1", 00:23:37.980 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:37.980 "strip_size_kb": 64, 00:23:37.980 "state": "configuring", 00:23:37.980 "raid_level": "raid5f", 00:23:37.980 "superblock": true, 00:23:37.980 "num_base_bdevs": 3, 00:23:37.980 "num_base_bdevs_discovered": 1, 00:23:37.980 "num_base_bdevs_operational": 2, 00:23:37.980 "base_bdevs_list": [ 00:23:37.980 { 00:23:37.980 "name": null, 00:23:37.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.980 "is_configured": false, 00:23:37.980 "data_offset": 2048, 00:23:37.980 "data_size": 63488 00:23:37.980 }, 00:23:37.980 { 00:23:37.980 "name": "pt2", 00:23:37.980 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:37.980 "is_configured": true, 00:23:37.980 "data_offset": 2048, 00:23:37.980 "data_size": 63488 00:23:37.980 }, 00:23:37.980 { 00:23:37.980 "name": null, 00:23:37.980 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:37.980 "is_configured": false, 00:23:37.980 "data_offset": 2048, 00:23:37.980 "data_size": 63488 00:23:37.980 } 00:23:37.980 ] 00:23:37.980 }' 00:23:37.980 05:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.980 05:42:41 -- common/autotest_common.sh@10 -- # set +x 00:23:38.238 05:42:42 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:38.238 05:42:42 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:38.238 05:42:42 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:38.238 05:42:42 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:38.496 [2024-10-07 05:42:42.362891] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:38.496 [2024-10-07 05:42:42.363107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.496 [2024-10-07 05:42:42.363279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:38.496 [2024-10-07 05:42:42.363427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.496 [2024-10-07 05:42:42.363959] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.496 [2024-10-07 05:42:42.364155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:38.496 [2024-10-07 05:42:42.364400] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:38.496 [2024-10-07 05:42:42.364551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:38.496 [2024-10-07 05:42:42.364769] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:23:38.496 [2024-10-07 05:42:42.364911] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:38.496 [2024-10-07 05:42:42.365096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:38.496 [2024-10-07 05:42:42.369195] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:23:38.496 [2024-10-07 05:42:42.369343] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:23:38.496 [2024-10-07 05:42:42.369717] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.496 pt3 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.496 05:42:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.754 05:42:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.754 "name": "raid_bdev1", 00:23:38.754 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:38.754 "strip_size_kb": 64, 00:23:38.754 "state": "online", 00:23:38.754 "raid_level": "raid5f", 00:23:38.754 "superblock": true, 00:23:38.754 "num_base_bdevs": 3, 00:23:38.754 "num_base_bdevs_discovered": 2, 00:23:38.754 "num_base_bdevs_operational": 2, 00:23:38.754 "base_bdevs_list": [ 00:23:38.754 { 00:23:38.754 "name": null, 00:23:38.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.754 "is_configured": false, 00:23:38.754 "data_offset": 2048, 00:23:38.754 "data_size": 63488 00:23:38.754 }, 00:23:38.754 { 00:23:38.754 "name": "pt2", 00:23:38.754 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 
00:23:38.754 "is_configured": true, 00:23:38.754 "data_offset": 2048, 00:23:38.754 "data_size": 63488 00:23:38.754 }, 00:23:38.754 { 00:23:38.754 "name": "pt3", 00:23:38.754 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:38.754 "is_configured": true, 00:23:38.754 "data_offset": 2048, 00:23:38.754 "data_size": 63488 00:23:38.754 } 00:23:38.754 ] 00:23:38.754 }' 00:23:38.754 05:42:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.754 05:42:42 -- common/autotest_common.sh@10 -- # set +x 00:23:39.320 05:42:43 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:39.320 05:42:43 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:39.578 [2024-10-07 05:42:43.378105] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.579 [2024-10-07 05:42:43.378257] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.579 [2024-10-07 05:42:43.378408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.579 [2024-10-07 05:42:43.378597] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.579 [2024-10-07 05:42:43.378730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:23:39.579 05:42:43 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.579 05:42:43 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:39.836 [2024-10-07 05:42:43.758173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:39.836 [2024-10-07 05:42:43.758383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.836 [2024-10-07 05:42:43.758467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:39.836 [2024-10-07 05:42:43.758716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.836 [2024-10-07 05:42:43.760916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.836 [2024-10-07 05:42:43.761109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:39.836 [2024-10-07 05:42:43.761343] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:39.836 [2024-10-07 05:42:43.761502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:39.836 pt1 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.836 05:42:43 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.836 05:42:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.837 05:42:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.837 05:42:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.837 05:42:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.095 05:42:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.095 "name": "raid_bdev1", 00:23:40.095 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:40.095 "strip_size_kb": 64, 00:23:40.095 "state": "configuring", 00:23:40.095 "raid_level": "raid5f", 00:23:40.095 "superblock": true, 00:23:40.095 "num_base_bdevs": 3, 00:23:40.095 "num_base_bdevs_discovered": 1, 00:23:40.095 "num_base_bdevs_operational": 3, 00:23:40.095 "base_bdevs_list": [ 00:23:40.095 { 00:23:40.095 "name": "pt1", 00:23:40.095 "uuid": "629bda2a-e4b9-5b64-b918-265dadb7570f", 00:23:40.095 "is_configured": true, 00:23:40.095 "data_offset": 2048, 00:23:40.095 "data_size": 63488 00:23:40.095 }, 00:23:40.095 { 00:23:40.095 "name": null, 00:23:40.095 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:40.095 "is_configured": false, 00:23:40.095 "data_offset": 2048, 00:23:40.095 "data_size": 63488 00:23:40.095 }, 00:23:40.095 { 00:23:40.095 "name": null, 00:23:40.095 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:40.095 "is_configured": false, 00:23:40.095 "data_offset": 2048, 00:23:40.095 "data_size": 63488 00:23:40.095 } 00:23:40.095 ] 00:23:40.095 }' 00:23:40.095 05:42:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.095 05:42:43 -- common/autotest_common.sh@10 -- # set +x 00:23:40.660 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:40.660 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:40.660 05:42:44 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:40.917 05:42:44 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:41.175 [2024-10-07 05:42:45.049961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:41.175 [2024-10-07 05:42:45.050184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:41.175 [2024-10-07 05:42:45.050263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:41.175 [2024-10-07 05:42:45.050594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:41.175 [2024-10-07 05:42:45.051083] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:41.175 [2024-10-07 05:42:45.051295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:41.175 [2024-10-07 05:42:45.051566] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:41.175 [2024-10-07 05:42:45.051697] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:41.175 [2024-10-07 05:42:45.051804] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:41.175 [2024-10-07 05:42:45.051862] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:23:41.175 [2024-10-07 05:42:45.052026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:41.175 pt3 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.175 05:42:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.433 05:42:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:41.433 "name": "raid_bdev1", 00:23:41.433 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:41.433 "strip_size_kb": 64, 00:23:41.433 "state": "configuring", 00:23:41.433 "raid_level": "raid5f", 00:23:41.433 "superblock": true, 00:23:41.433 "num_base_bdevs": 3, 00:23:41.433 "num_base_bdevs_discovered": 1, 00:23:41.433 "num_base_bdevs_operational": 2, 00:23:41.433 "base_bdevs_list": [ 00:23:41.433 { 00:23:41.433 "name": null, 00:23:41.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.433 "is_configured": false, 00:23:41.433 "data_offset": 2048, 00:23:41.433 "data_size": 63488 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "name": null, 00:23:41.433 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:41.433 "is_configured": false, 00:23:41.433 "data_offset": 2048, 00:23:41.433 "data_size": 63488 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "name": "pt3", 00:23:41.433 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:41.433 "is_configured": true, 00:23:41.433 "data_offset": 2048, 00:23:41.433 "data_size": 63488 00:23:41.433 } 00:23:41.433 ] 00:23:41.433 }' 00:23:41.433 05:42:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:41.433 05:42:45 -- common/autotest_common.sh@10 -- # set +x 00:23:41.999 05:42:45 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:41.999 05:42:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:41.999 05:42:45 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:42.258 [2024-10-07 05:42:46.118149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:42.258 [2024-10-07 05:42:46.118388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.258 [2024-10-07 
05:42:46.118465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:42.258 [2024-10-07 05:42:46.118788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.258 [2024-10-07 05:42:46.119372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.258 [2024-10-07 05:42:46.119557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:42.258 [2024-10-07 05:42:46.119766] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:42.258 [2024-10-07 05:42:46.119929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:42.258 [2024-10-07 05:42:46.120162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:23:42.258 [2024-10-07 05:42:46.120287] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:42.258 [2024-10-07 05:42:46.120482] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:23:42.258 [2024-10-07 05:42:46.124515] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:23:42.258 [2024-10-07 05:42:46.124664] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:23:42.258 [2024-10-07 05:42:46.124986] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.258 pt2 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.258 05:42:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.517 05:42:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.517 "name": "raid_bdev1", 00:23:42.517 "uuid": "1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7", 00:23:42.517 "strip_size_kb": 64, 00:23:42.517 "state": "online", 00:23:42.517 "raid_level": "raid5f", 00:23:42.517 "superblock": true, 00:23:42.517 "num_base_bdevs": 3, 00:23:42.517 "num_base_bdevs_discovered": 2, 00:23:42.517 "num_base_bdevs_operational": 2, 00:23:42.517 "base_bdevs_list": [ 00:23:42.517 { 00:23:42.517 "name": null, 00:23:42.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.517 "is_configured": false, 00:23:42.517 "data_offset": 2048, 00:23:42.517 "data_size": 63488 00:23:42.517 }, 00:23:42.517 { 00:23:42.517 "name": "pt2", 00:23:42.517 "uuid": "fc854b4a-5a6c-57fd-a2de-a4a0b033a424", 00:23:42.517 "is_configured": true, 00:23:42.517 "data_offset": 2048, 
00:23:42.517 "data_size": 63488 00:23:42.517 }, 00:23:42.517 { 00:23:42.517 "name": "pt3", 00:23:42.517 "uuid": "609f6202-8c7b-5d73-9169-b90d5f492230", 00:23:42.517 "is_configured": true, 00:23:42.517 "data_offset": 2048, 00:23:42.517 "data_size": 63488 00:23:42.517 } 00:23:42.517 ] 00:23:42.517 }' 00:23:42.517 05:42:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.517 05:42:46 -- common/autotest_common.sh@10 -- # set +x 00:23:43.083 05:42:46 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:43.083 05:42:46 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:43.342 [2024-10-07 05:42:47.205413] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:43.342 05:42:47 -- bdev/bdev_raid.sh@506 -- # '[' 1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 '!=' 1b6d1dd7-81ea-4454-ac0b-6b45eff6f7c7 ']' 00:23:43.342 05:42:47 -- bdev/bdev_raid.sh@511 -- # killprocess 171403 00:23:43.342 05:42:47 -- common/autotest_common.sh@926 -- # '[' -z 171403 ']' 00:23:43.342 05:42:47 -- common/autotest_common.sh@930 -- # kill -0 171403 00:23:43.342 05:42:47 -- common/autotest_common.sh@931 -- # uname 00:23:43.342 05:42:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:43.342 05:42:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 171403 00:23:43.342 05:42:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:43.342 05:42:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:43.342 05:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 171403' 00:23:43.342 killing process with pid 171403 00:23:43.342 05:42:47 -- common/autotest_common.sh@945 -- # kill 171403 00:23:43.342 [2024-10-07 05:42:47.246181] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:43.342 05:42:47 -- common/autotest_common.sh@950 -- # wait 171403 00:23:43.342 [2024-10-07 05:42:47.246287] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.342 [2024-10-07 05:42:47.246345] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.342 [2024-10-07 05:42:47.246357] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:23:43.600 [2024-10-07 05:42:47.438019] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:44.534 05:42:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:44.534 00:23:44.534 real 0m18.028s 00:23:44.534 user 0m33.000s 00:23:44.534 sys 0m2.205s 00:23:44.534 05:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:44.534 05:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:44.535 ************************************ 00:23:44.535 END TEST raid5f_superblock_test 00:23:44.535 ************************************ 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:44.535 05:42:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:44.535 05:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:44.535 05:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:44.535 ************************************ 00:23:44.535 START TEST raid5f_rebuild_test 00:23:44.535 ************************************ 00:23:44.535 05:42:48 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:44.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=171995 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 171995 /var/tmp/spdk-raid.sock 00:23:44.535 05:42:48 -- common/autotest_common.sh@819 -- # '[' -z 171995 ']' 00:23:44.535 05:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:44.535 05:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:44.535 05:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:44.535 05:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:44.535 05:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:44.535 05:42:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:44.535 [2024-10-07 05:42:48.481328] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:23:44.535 [2024-10-07 05:42:48.481749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171995 ] 00:23:44.535 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:44.535 Zero copy mechanism will not be used. 
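Note (not part of the captured output): the raid5f_rebuild_test run traced above follows the usual SPDK pattern of driving a standalone bdevperf instance over a private JSON-RPC socket — the app is started with -z so it waits for RPC before running the workload, and the harness then builds the configuration through scripts/rpc.py on /var/tmp/spdk-raid.sock. A minimal sketch of that flow, assuming the same repository paths and flags that appear in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # wait for the RPC socket to come up (the harness uses waitforlisten), then configure over RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1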
00:23:44.793 [2024-10-07 05:42:48.649112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.051 [2024-10-07 05:42:48.806041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.051 [2024-10-07 05:42:48.970116] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:45.618 05:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:45.618 05:42:49 -- common/autotest_common.sh@852 -- # return 0 00:23:45.618 05:42:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:45.618 05:42:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:45.618 05:42:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:45.876 BaseBdev1 00:23:45.876 05:42:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:45.876 05:42:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:45.876 05:42:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:46.135 BaseBdev2 00:23:46.135 05:42:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:46.135 05:42:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:46.135 05:42:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:46.393 BaseBdev3 00:23:46.393 05:42:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:46.650 spare_malloc 00:23:46.650 05:42:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:46.650 spare_delay 00:23:46.650 05:42:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:46.908 [2024-10-07 05:42:50.787303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:46.908 [2024-10-07 05:42:50.787578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.908 [2024-10-07 05:42:50.787661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:46.908 [2024-10-07 05:42:50.788004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.908 [2024-10-07 05:42:50.790241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.908 [2024-10-07 05:42:50.790428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:46.908 spare 00:23:46.908 05:42:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:47.167 [2024-10-07 05:42:51.023390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:47.167 [2024-10-07 05:42:51.025333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:47.167 [2024-10-07 05:42:51.025514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:47.167 [2024-10-07 05:42:51.025669] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:23:47.167 
[2024-10-07 05:42:51.025731] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:47.167 [2024-10-07 05:42:51.025951] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:47.167 [2024-10-07 05:42:51.030378] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:23:47.167 [2024-10-07 05:42:51.030543] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:23:47.167 [2024-10-07 05:42:51.030873] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.167 05:42:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.425 05:42:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.425 "name": "raid_bdev1", 00:23:47.425 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:47.425 "strip_size_kb": 64, 00:23:47.425 "state": "online", 00:23:47.425 "raid_level": "raid5f", 00:23:47.425 "superblock": false, 00:23:47.425 "num_base_bdevs": 3, 00:23:47.425 "num_base_bdevs_discovered": 3, 00:23:47.425 "num_base_bdevs_operational": 3, 00:23:47.425 "base_bdevs_list": [ 00:23:47.425 { 00:23:47.425 "name": "BaseBdev1", 00:23:47.425 "uuid": "af8bd389-ac86-4e3a-add9-df25590136fa", 00:23:47.425 "is_configured": true, 00:23:47.425 "data_offset": 0, 00:23:47.425 "data_size": 65536 00:23:47.425 }, 00:23:47.425 { 00:23:47.425 "name": "BaseBdev2", 00:23:47.425 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:47.425 "is_configured": true, 00:23:47.425 "data_offset": 0, 00:23:47.425 "data_size": 65536 00:23:47.425 }, 00:23:47.425 { 00:23:47.425 "name": "BaseBdev3", 00:23:47.425 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:47.425 "is_configured": true, 00:23:47.425 "data_offset": 0, 00:23:47.425 "data_size": 65536 00:23:47.425 } 00:23:47.425 ] 00:23:47.425 }' 00:23:47.425 05:42:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.425 05:42:51 -- common/autotest_common.sh@10 -- # set +x 00:23:47.990 05:42:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:47.990 05:42:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:48.247 [2024-10-07 05:42:51.999911] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@570 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:48.247 05:42:52 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@12 -- # local i 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:48.247 05:42:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:48.505 [2024-10-07 05:42:52.471927] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:48.764 /dev/nbd0 00:23:48.764 05:42:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:48.764 05:42:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:48.764 05:42:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:48.764 05:42:52 -- common/autotest_common.sh@857 -- # local i 00:23:48.764 05:42:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:48.764 05:42:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:48.764 05:42:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:48.764 05:42:52 -- common/autotest_common.sh@861 -- # break 00:23:48.764 05:42:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:48.764 05:42:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:48.764 05:42:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:48.764 1+0 records in 00:23:48.764 1+0 records out 00:23:48.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415184 s, 9.9 MB/s 00:23:48.764 05:42:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.764 05:42:52 -- common/autotest_common.sh@874 -- # size=4096 00:23:48.764 05:42:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.764 05:42:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:48.764 05:42:52 -- common/autotest_common.sh@877 -- # return 0 00:23:48.764 05:42:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.764 05:42:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:48.764 05:42:52 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:48.764 05:42:52 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:48.764 05:42:52 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:48.764 05:42:52 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:49.022 512+0 records in 00:23:49.023 512+0 records out 00:23:49.023 67108864 bytes (67 MB, 64 MiB) copied, 0.443212 s, 151 MB/s 00:23:49.281 05:42:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:49.281 05:42:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
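Note (not part of the captured output): the dd parameters above follow directly from the raid geometry traced earlier in this log. With a 64 KiB strip size, three base bdevs and 512-byte blocks, one full raid5f stripe carries two data strips, so the harness writes in 128 KiB units (write_unit_size 256 blocks) and 512 such writes fill 64 MiB — the 67108864 bytes reported by dd. A short sketch of the arithmetic, using only values that appear in the log:

    strip_kb=64; num_base_bdevs=3; blocklen=512
    write_unit_bytes=$(( strip_kb * 1024 * (num_base_bdevs - 1) ))  # 131072 bytes = 128 KiB = dd bs
    write_unit_blocks=$(( write_unit_bytes / blocklen ))            # 256 blocks = write_unit_size
    total_bytes=$(( write_unit_bytes * 512 ))                       # 67108864 bytes = 64 MiB = dd bs * count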
00:23:49.281 05:42:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:49.281 05:42:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:49.281 05:42:53 -- bdev/nbd_common.sh@51 -- # local i 00:23:49.281 05:42:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.281 05:42:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:49.539 [2024-10-07 05:42:53.269948] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@41 -- # break 00:23:49.539 05:42:53 -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.539 05:42:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:49.539 [2024-10-07 05:42:53.514855] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.798 05:42:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.056 05:42:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.056 "name": "raid_bdev1", 00:23:50.056 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:50.056 "strip_size_kb": 64, 00:23:50.056 "state": "online", 00:23:50.056 "raid_level": "raid5f", 00:23:50.056 "superblock": false, 00:23:50.056 "num_base_bdevs": 3, 00:23:50.056 "num_base_bdevs_discovered": 2, 00:23:50.056 "num_base_bdevs_operational": 2, 00:23:50.056 "base_bdevs_list": [ 00:23:50.056 { 00:23:50.056 "name": null, 00:23:50.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.056 "is_configured": false, 00:23:50.056 "data_offset": 0, 00:23:50.056 "data_size": 65536 00:23:50.056 }, 00:23:50.056 { 00:23:50.056 "name": "BaseBdev2", 00:23:50.056 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:50.056 "is_configured": true, 00:23:50.056 "data_offset": 0, 00:23:50.056 "data_size": 65536 00:23:50.056 }, 00:23:50.056 { 00:23:50.056 "name": "BaseBdev3", 00:23:50.056 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:50.056 "is_configured": true, 00:23:50.056 "data_offset": 0, 00:23:50.056 "data_size": 65536 00:23:50.056 } 00:23:50.056 ] 00:23:50.056 }' 
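Note (not part of the captured output): this is the degraded-array check — after the 64 MiB of data has been written through NBD, BaseBdev1 is pulled out with bdev_raid_remove_base_bdev and the harness asserts that raid_bdev1 stays online with only two of three base bdevs discovered. A condensed, hedged form of that verification (the real verify_raid_bdev_state helper checks more fields than this):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
    # expected: "online 2" -- raid5f keeps serving I/O with one base bdev missing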
00:23:50.056 05:42:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.056 05:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:50.684 05:42:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:50.684 [2024-10-07 05:42:54.591054] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:50.684 [2024-10-07 05:42:54.591091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.684 [2024-10-07 05:42:54.601607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:23:50.684 [2024-10-07 05:42:54.607073] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.684 05:42:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.058 05:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.058 "name": "raid_bdev1", 00:23:52.058 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:52.058 "strip_size_kb": 64, 00:23:52.058 "state": "online", 00:23:52.058 "raid_level": "raid5f", 00:23:52.058 "superblock": false, 00:23:52.058 "num_base_bdevs": 3, 00:23:52.058 "num_base_bdevs_discovered": 3, 00:23:52.058 "num_base_bdevs_operational": 3, 00:23:52.058 "process": { 00:23:52.058 "type": "rebuild", 00:23:52.058 "target": "spare", 00:23:52.058 "progress": { 00:23:52.058 "blocks": 24576, 00:23:52.058 "percent": 18 00:23:52.058 } 00:23:52.058 }, 00:23:52.058 "base_bdevs_list": [ 00:23:52.058 { 00:23:52.058 "name": "spare", 00:23:52.058 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:52.058 "is_configured": true, 00:23:52.058 "data_offset": 0, 00:23:52.058 "data_size": 65536 00:23:52.058 }, 00:23:52.058 { 00:23:52.059 "name": "BaseBdev2", 00:23:52.059 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:52.059 "is_configured": true, 00:23:52.059 "data_offset": 0, 00:23:52.059 "data_size": 65536 00:23:52.059 }, 00:23:52.059 { 00:23:52.059 "name": "BaseBdev3", 00:23:52.059 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:52.059 "is_configured": true, 00:23:52.059 "data_offset": 0, 00:23:52.059 "data_size": 65536 00:23:52.059 } 00:23:52.059 ] 00:23:52.059 }' 00:23:52.059 05:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.059 05:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.059 05:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.059 05:42:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.059 05:42:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:52.318 [2024-10-07 05:42:56.140026] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:52.318 [2024-10-07 05:42:56.219032] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:52.318 [2024-10-07 05:42:56.219111] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.318 05:42:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.576 05:42:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.576 "name": "raid_bdev1", 00:23:52.576 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:52.576 "strip_size_kb": 64, 00:23:52.576 "state": "online", 00:23:52.576 "raid_level": "raid5f", 00:23:52.576 "superblock": false, 00:23:52.576 "num_base_bdevs": 3, 00:23:52.576 "num_base_bdevs_discovered": 2, 00:23:52.576 "num_base_bdevs_operational": 2, 00:23:52.576 "base_bdevs_list": [ 00:23:52.576 { 00:23:52.576 "name": null, 00:23:52.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.576 "is_configured": false, 00:23:52.576 "data_offset": 0, 00:23:52.576 "data_size": 65536 00:23:52.576 }, 00:23:52.576 { 00:23:52.576 "name": "BaseBdev2", 00:23:52.576 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:52.576 "is_configured": true, 00:23:52.576 "data_offset": 0, 00:23:52.576 "data_size": 65536 00:23:52.576 }, 00:23:52.576 { 00:23:52.576 "name": "BaseBdev3", 00:23:52.576 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:52.576 "is_configured": true, 00:23:52.576 "data_offset": 0, 00:23:52.576 "data_size": 65536 00:23:52.576 } 00:23:52.576 ] 00:23:52.576 }' 00:23:52.576 05:42:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.576 05:42:56 -- common/autotest_common.sh@10 -- # set +x 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.143 05:42:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.402 "name": "raid_bdev1", 00:23:53.402 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:53.402 "strip_size_kb": 64, 00:23:53.402 "state": "online", 00:23:53.402 "raid_level": "raid5f", 00:23:53.402 "superblock": false, 00:23:53.402 "num_base_bdevs": 3, 00:23:53.402 
"num_base_bdevs_discovered": 2, 00:23:53.402 "num_base_bdevs_operational": 2, 00:23:53.402 "base_bdevs_list": [ 00:23:53.402 { 00:23:53.402 "name": null, 00:23:53.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.402 "is_configured": false, 00:23:53.402 "data_offset": 0, 00:23:53.402 "data_size": 65536 00:23:53.402 }, 00:23:53.402 { 00:23:53.402 "name": "BaseBdev2", 00:23:53.402 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:53.402 "is_configured": true, 00:23:53.402 "data_offset": 0, 00:23:53.402 "data_size": 65536 00:23:53.402 }, 00:23:53.402 { 00:23:53.402 "name": "BaseBdev3", 00:23:53.402 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:53.402 "is_configured": true, 00:23:53.402 "data_offset": 0, 00:23:53.402 "data_size": 65536 00:23:53.402 } 00:23:53.402 ] 00:23:53.402 }' 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:53.402 05:42:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:53.660 [2024-10-07 05:42:57.618767] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:53.660 [2024-10-07 05:42:57.618819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.660 [2024-10-07 05:42:57.628402] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:23:53.660 [2024-10-07 05:42:57.633897] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.919 05:42:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.855 05:42:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.114 "name": "raid_bdev1", 00:23:55.114 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:55.114 "strip_size_kb": 64, 00:23:55.114 "state": "online", 00:23:55.114 "raid_level": "raid5f", 00:23:55.114 "superblock": false, 00:23:55.114 "num_base_bdevs": 3, 00:23:55.114 "num_base_bdevs_discovered": 3, 00:23:55.114 "num_base_bdevs_operational": 3, 00:23:55.114 "process": { 00:23:55.114 "type": "rebuild", 00:23:55.114 "target": "spare", 00:23:55.114 "progress": { 00:23:55.114 "blocks": 24576, 00:23:55.114 "percent": 18 00:23:55.114 } 00:23:55.114 }, 00:23:55.114 "base_bdevs_list": [ 00:23:55.114 { 00:23:55.114 "name": "spare", 00:23:55.114 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:55.114 "is_configured": true, 00:23:55.114 "data_offset": 0, 00:23:55.114 "data_size": 65536 00:23:55.114 }, 00:23:55.114 { 00:23:55.114 "name": "BaseBdev2", 00:23:55.114 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:55.114 "is_configured": true, 
00:23:55.114 "data_offset": 0, 00:23:55.114 "data_size": 65536 00:23:55.114 }, 00:23:55.114 { 00:23:55.114 "name": "BaseBdev3", 00:23:55.114 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:55.114 "is_configured": true, 00:23:55.114 "data_offset": 0, 00:23:55.114 "data_size": 65536 00:23:55.114 } 00:23:55.114 ] 00:23:55.114 }' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@657 -- # local timeout=618 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.114 05:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.114 05:42:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.373 "name": "raid_bdev1", 00:23:55.373 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:55.373 "strip_size_kb": 64, 00:23:55.373 "state": "online", 00:23:55.373 "raid_level": "raid5f", 00:23:55.373 "superblock": false, 00:23:55.373 "num_base_bdevs": 3, 00:23:55.373 "num_base_bdevs_discovered": 3, 00:23:55.373 "num_base_bdevs_operational": 3, 00:23:55.373 "process": { 00:23:55.373 "type": "rebuild", 00:23:55.373 "target": "spare", 00:23:55.373 "progress": { 00:23:55.373 "blocks": 30720, 00:23:55.373 "percent": 23 00:23:55.373 } 00:23:55.373 }, 00:23:55.373 "base_bdevs_list": [ 00:23:55.373 { 00:23:55.373 "name": "spare", 00:23:55.373 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:55.373 "is_configured": true, 00:23:55.373 "data_offset": 0, 00:23:55.373 "data_size": 65536 00:23:55.373 }, 00:23:55.373 { 00:23:55.373 "name": "BaseBdev2", 00:23:55.373 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:55.373 "is_configured": true, 00:23:55.373 "data_offset": 0, 00:23:55.373 "data_size": 65536 00:23:55.373 }, 00:23:55.373 { 00:23:55.373 "name": "BaseBdev3", 00:23:55.373 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:55.373 "is_configured": true, 00:23:55.373 "data_offset": 0, 00:23:55.373 "data_size": 65536 00:23:55.373 } 00:23:55.373 ] 00:23:55.373 }' 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.373 05:42:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:56.753 
05:43:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.753 "name": "raid_bdev1", 00:23:56.753 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:56.753 "strip_size_kb": 64, 00:23:56.753 "state": "online", 00:23:56.753 "raid_level": "raid5f", 00:23:56.753 "superblock": false, 00:23:56.753 "num_base_bdevs": 3, 00:23:56.753 "num_base_bdevs_discovered": 3, 00:23:56.753 "num_base_bdevs_operational": 3, 00:23:56.753 "process": { 00:23:56.753 "type": "rebuild", 00:23:56.753 "target": "spare", 00:23:56.753 "progress": { 00:23:56.753 "blocks": 57344, 00:23:56.753 "percent": 43 00:23:56.753 } 00:23:56.753 }, 00:23:56.753 "base_bdevs_list": [ 00:23:56.753 { 00:23:56.753 "name": "spare", 00:23:56.753 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:56.753 "is_configured": true, 00:23:56.753 "data_offset": 0, 00:23:56.753 "data_size": 65536 00:23:56.753 }, 00:23:56.753 { 00:23:56.753 "name": "BaseBdev2", 00:23:56.753 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:56.753 "is_configured": true, 00:23:56.753 "data_offset": 0, 00:23:56.753 "data_size": 65536 00:23:56.753 }, 00:23:56.753 { 00:23:56.753 "name": "BaseBdev3", 00:23:56.753 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:56.753 "is_configured": true, 00:23:56.753 "data_offset": 0, 00:23:56.753 "data_size": 65536 00:23:56.753 } 00:23:56.753 ] 00:23:56.753 }' 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.753 05:43:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:57.689 05:43:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.689 05:43:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.689 05:43:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.689 05:43:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.690 05:43:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.690 05:43:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.948 05:43:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.948 05:43:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.948 05:43:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.948 "name": "raid_bdev1", 00:23:57.948 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:57.948 "strip_size_kb": 64, 00:23:57.948 "state": "online", 00:23:57.948 "raid_level": "raid5f", 00:23:57.948 "superblock": false, 00:23:57.948 "num_base_bdevs": 3, 00:23:57.948 "num_base_bdevs_discovered": 3, 00:23:57.948 "num_base_bdevs_operational": 3, 
00:23:57.948 "process": { 00:23:57.948 "type": "rebuild", 00:23:57.948 "target": "spare", 00:23:57.948 "progress": { 00:23:57.948 "blocks": 86016, 00:23:57.948 "percent": 65 00:23:57.948 } 00:23:57.948 }, 00:23:57.948 "base_bdevs_list": [ 00:23:57.948 { 00:23:57.948 "name": "spare", 00:23:57.948 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:57.948 "is_configured": true, 00:23:57.948 "data_offset": 0, 00:23:57.948 "data_size": 65536 00:23:57.948 }, 00:23:57.948 { 00:23:57.948 "name": "BaseBdev2", 00:23:57.948 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:57.948 "is_configured": true, 00:23:57.948 "data_offset": 0, 00:23:57.948 "data_size": 65536 00:23:57.948 }, 00:23:57.948 { 00:23:57.948 "name": "BaseBdev3", 00:23:57.948 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:57.948 "is_configured": true, 00:23:57.948 "data_offset": 0, 00:23:57.948 "data_size": 65536 00:23:57.948 } 00:23:57.948 ] 00:23:57.948 }' 00:23:57.948 05:43:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.207 05:43:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.207 05:43:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.207 05:43:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.207 05:43:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.143 05:43:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.402 05:43:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:59.402 "name": "raid_bdev1", 00:23:59.402 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:23:59.402 "strip_size_kb": 64, 00:23:59.402 "state": "online", 00:23:59.402 "raid_level": "raid5f", 00:23:59.402 "superblock": false, 00:23:59.402 "num_base_bdevs": 3, 00:23:59.402 "num_base_bdevs_discovered": 3, 00:23:59.402 "num_base_bdevs_operational": 3, 00:23:59.402 "process": { 00:23:59.402 "type": "rebuild", 00:23:59.402 "target": "spare", 00:23:59.402 "progress": { 00:23:59.402 "blocks": 112640, 00:23:59.402 "percent": 85 00:23:59.402 } 00:23:59.402 }, 00:23:59.402 "base_bdevs_list": [ 00:23:59.402 { 00:23:59.402 "name": "spare", 00:23:59.402 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:23:59.402 "is_configured": true, 00:23:59.402 "data_offset": 0, 00:23:59.402 "data_size": 65536 00:23:59.402 }, 00:23:59.402 { 00:23:59.402 "name": "BaseBdev2", 00:23:59.402 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:23:59.402 "is_configured": true, 00:23:59.402 "data_offset": 0, 00:23:59.402 "data_size": 65536 00:23:59.402 }, 00:23:59.402 { 00:23:59.402 "name": "BaseBdev3", 00:23:59.402 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:23:59.402 "is_configured": true, 00:23:59.402 "data_offset": 0, 00:23:59.402 "data_size": 65536 00:23:59.402 } 00:23:59.402 ] 00:23:59.402 }' 00:23:59.403 05:43:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:59.403 05:43:03 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.403 05:43:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:59.403 05:43:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.403 05:43:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:00.339 [2024-10-07 05:43:04.086928] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:00.339 [2024-10-07 05:43:04.086998] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:00.339 [2024-10-07 05:43:04.087072] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.339 05:43:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.597 05:43:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.597 05:43:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.856 "name": "raid_bdev1", 00:24:00.856 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:24:00.856 "strip_size_kb": 64, 00:24:00.856 "state": "online", 00:24:00.856 "raid_level": "raid5f", 00:24:00.856 "superblock": false, 00:24:00.856 "num_base_bdevs": 3, 00:24:00.856 "num_base_bdevs_discovered": 3, 00:24:00.856 "num_base_bdevs_operational": 3, 00:24:00.856 "base_bdevs_list": [ 00:24:00.856 { 00:24:00.856 "name": "spare", 00:24:00.856 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:24:00.856 "is_configured": true, 00:24:00.856 "data_offset": 0, 00:24:00.856 "data_size": 65536 00:24:00.856 }, 00:24:00.856 { 00:24:00.856 "name": "BaseBdev2", 00:24:00.856 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:24:00.856 "is_configured": true, 00:24:00.856 "data_offset": 0, 00:24:00.856 "data_size": 65536 00:24:00.856 }, 00:24:00.856 { 00:24:00.856 "name": "BaseBdev3", 00:24:00.856 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:24:00.856 "is_configured": true, 00:24:00.856 "data_offset": 0, 00:24:00.856 "data_size": 65536 00:24:00.856 } 00:24:00.856 ] 00:24:00.856 }' 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@660 -- # break 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.856 05:43:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:24:01.115 05:43:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.115 "name": "raid_bdev1", 00:24:01.115 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:24:01.115 "strip_size_kb": 64, 00:24:01.115 "state": "online", 00:24:01.115 "raid_level": "raid5f", 00:24:01.115 "superblock": false, 00:24:01.115 "num_base_bdevs": 3, 00:24:01.115 "num_base_bdevs_discovered": 3, 00:24:01.115 "num_base_bdevs_operational": 3, 00:24:01.115 "base_bdevs_list": [ 00:24:01.115 { 00:24:01.115 "name": "spare", 00:24:01.115 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:24:01.115 "is_configured": true, 00:24:01.115 "data_offset": 0, 00:24:01.115 "data_size": 65536 00:24:01.115 }, 00:24:01.115 { 00:24:01.115 "name": "BaseBdev2", 00:24:01.115 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:24:01.115 "is_configured": true, 00:24:01.115 "data_offset": 0, 00:24:01.115 "data_size": 65536 00:24:01.115 }, 00:24:01.115 { 00:24:01.115 "name": "BaseBdev3", 00:24:01.115 "uuid": "bd3f155b-ac05-4083-bb95-3df874156c6b", 00:24:01.115 "is_configured": true, 00:24:01.115 "data_offset": 0, 00:24:01.115 "data_size": 65536 00:24:01.115 } 00:24:01.115 ] 00:24:01.115 }' 00:24:01.115 05:43:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:01.115 05:43:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:01.115 05:43:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.115 05:43:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.373 05:43:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.373 "name": "raid_bdev1", 00:24:01.373 "uuid": "c9bbf310-f0ff-443c-91fd-f5ba0538a42b", 00:24:01.373 "strip_size_kb": 64, 00:24:01.373 "state": "online", 00:24:01.373 "raid_level": "raid5f", 00:24:01.373 "superblock": false, 00:24:01.373 "num_base_bdevs": 3, 00:24:01.373 "num_base_bdevs_discovered": 3, 00:24:01.373 "num_base_bdevs_operational": 3, 00:24:01.373 "base_bdevs_list": [ 00:24:01.373 { 00:24:01.373 "name": "spare", 00:24:01.373 "uuid": "766b8ca6-0003-58a0-b2ed-104b31716dce", 00:24:01.373 "is_configured": true, 00:24:01.373 "data_offset": 0, 00:24:01.373 "data_size": 65536 00:24:01.373 }, 00:24:01.373 { 00:24:01.373 "name": "BaseBdev2", 00:24:01.373 "uuid": "cead1b2e-c8ac-4b55-bd71-fc3f1f05a80a", 00:24:01.373 "is_configured": true, 00:24:01.373 "data_offset": 0, 00:24:01.373 "data_size": 65536 00:24:01.373 }, 00:24:01.373 { 00:24:01.373 "name": "BaseBdev3", 00:24:01.373 "uuid": 
"bd3f155b-ac05-4083-bb95-3df874156c6b", 00:24:01.373 "is_configured": true, 00:24:01.373 "data_offset": 0, 00:24:01.373 "data_size": 65536 00:24:01.373 } 00:24:01.373 ] 00:24:01.373 }' 00:24:01.373 05:43:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.373 05:43:05 -- common/autotest_common.sh@10 -- # set +x 00:24:01.972 05:43:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:02.230 [2024-10-07 05:43:06.110373] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:02.230 [2024-10-07 05:43:06.110418] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:02.230 [2024-10-07 05:43:06.110538] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:02.230 [2024-10-07 05:43:06.110624] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:02.230 [2024-10-07 05:43:06.110637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:24:02.230 05:43:06 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:02.230 05:43:06 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.487 05:43:06 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:02.487 05:43:06 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:02.487 05:43:06 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:02.487 05:43:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:02.487 05:43:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:02.487 05:43:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@12 -- # local i 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:02.488 05:43:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:02.746 /dev/nbd0 00:24:02.746 05:43:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:02.746 05:43:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:02.746 05:43:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:02.746 05:43:06 -- common/autotest_common.sh@857 -- # local i 00:24:02.746 05:43:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:02.746 05:43:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:02.746 05:43:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:02.746 05:43:06 -- common/autotest_common.sh@861 -- # break 00:24:02.746 05:43:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:02.746 05:43:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:02.746 05:43:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:02.746 1+0 records in 00:24:02.746 1+0 records out 00:24:02.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588029 s, 7.0 MB/s 00:24:02.746 05:43:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:02.746 05:43:06 
-- common/autotest_common.sh@874 -- # size=4096 00:24:02.746 05:43:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:02.746 05:43:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:02.746 05:43:06 -- common/autotest_common.sh@877 -- # return 0 00:24:02.746 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:02.746 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:02.746 05:43:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:03.004 /dev/nbd1 00:24:03.004 05:43:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:03.004 05:43:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:03.004 05:43:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:03.004 05:43:06 -- common/autotest_common.sh@857 -- # local i 00:24:03.004 05:43:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:03.004 05:43:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:03.004 05:43:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:03.004 05:43:06 -- common/autotest_common.sh@861 -- # break 00:24:03.004 05:43:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:03.004 05:43:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:03.004 05:43:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:03.004 1+0 records in 00:24:03.004 1+0 records out 00:24:03.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602341 s, 6.8 MB/s 00:24:03.004 05:43:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.004 05:43:06 -- common/autotest_common.sh@874 -- # size=4096 00:24:03.004 05:43:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.004 05:43:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:03.004 05:43:06 -- common/autotest_common.sh@877 -- # return 0 00:24:03.004 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:03.004 05:43:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:03.004 05:43:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:03.262 05:43:07 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@51 -- # local i 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.262 05:43:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@41 -- # break 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:24:03.520 05:43:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@41 -- # break 00:24:03.777 05:43:07 -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.777 05:43:07 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:03.777 05:43:07 -- bdev/bdev_raid.sh@709 -- # killprocess 171995 00:24:03.777 05:43:07 -- common/autotest_common.sh@926 -- # '[' -z 171995 ']' 00:24:03.777 05:43:07 -- common/autotest_common.sh@930 -- # kill -0 171995 00:24:03.777 05:43:07 -- common/autotest_common.sh@931 -- # uname 00:24:03.777 05:43:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.777 05:43:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 171995 00:24:03.777 05:43:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.777 05:43:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.777 05:43:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 171995' 00:24:03.777 killing process with pid 171995 00:24:03.777 05:43:07 -- common/autotest_common.sh@945 -- # kill 171995 00:24:03.777 Received shutdown signal, test time was about 60.000000 seconds 00:24:03.777 00:24:03.777 Latency(us) 00:24:03.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.777 =================================================================================================================== 00:24:03.777 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.777 [2024-10-07 05:43:07.594610] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:03.777 05:43:07 -- common/autotest_common.sh@950 -- # wait 171995 00:24:04.035 [2024-10-07 05:43:07.865032] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:04.970 05:43:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:04.970 00:24:04.970 real 0m20.482s 00:24:04.970 user 0m30.667s 00:24:04.970 sys 0m2.330s 00:24:04.970 05:43:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.970 05:43:08 -- common/autotest_common.sh@10 -- # set +x 00:24:04.970 ************************************ 00:24:04.970 END TEST raid5f_rebuild_test 00:24:04.970 ************************************ 00:24:04.970 05:43:08 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:24:04.970 05:43:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:04.970 05:43:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.970 05:43:08 -- common/autotest_common.sh@10 -- # set +x 00:24:05.230 ************************************ 00:24:05.230 START TEST raid5f_rebuild_test_sb 00:24:05.230 ************************************ 00:24:05.230 05:43:08 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=172532 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 172532 /var/tmp/spdk-raid.sock 00:24:05.230 05:43:08 -- common/autotest_common.sh@819 -- # '[' -z 172532 ']' 00:24:05.230 05:43:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:05.230 05:43:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:05.230 05:43:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:05.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:05.230 05:43:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:05.230 05:43:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:05.230 05:43:08 -- common/autotest_common.sh@10 -- # set +x 00:24:05.230 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:05.230 Zero copy mechanism will not be used. 00:24:05.230 [2024-10-07 05:43:09.030847] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
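
The bdevperf invocation above (-r /var/tmp/spdk-raid.sock ... -z -L bdev_raid) starts the raid test target and then blocks in waitforlisten until the RPC socket answers. A minimal standalone sketch of that launch-and-wait step, assuming the paths shown in the trace and substituting a simple polling loop for the repo's waitforlisten helper:

#!/usr/bin/env bash
# Illustrative sketch only: flags and paths are copied from the trace above,
# the polling loop is a stand-in for common/autotest_common.sh's waitforlisten.
set -euo pipefail

spdk_dir=/home/vagrant/spdk_repo/spdk          # repo path as logged
rpc_sock=/var/tmp/spdk-raid.sock               # RPC socket used by the raid tests

# Start bdevperf in "wait for RPC" mode (-z) against the raid socket.
"$spdk_dir/build/examples/bdevperf" -r "$rpc_sock" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Wait until the UNIX domain socket exists and answers a basic RPC.
for _ in $(seq 1 100); do
    if [ -S "$rpc_sock" ] && \
       "$spdk_dir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "bdevperf (pid $raid_pid) is listening on $rpc_sock"
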
00:24:05.230 [2024-10-07 05:43:09.031061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172532 ] 00:24:05.230 [2024-10-07 05:43:09.201245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.490 [2024-10-07 05:43:09.387652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.749 [2024-10-07 05:43:09.574062] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:06.025 05:43:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:06.025 05:43:09 -- common/autotest_common.sh@852 -- # return 0 00:24:06.025 05:43:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:06.025 05:43:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:06.025 05:43:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:06.295 BaseBdev1_malloc 00:24:06.295 05:43:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:06.553 [2024-10-07 05:43:10.361155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:06.553 [2024-10-07 05:43:10.361267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.553 [2024-10-07 05:43:10.361305] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:06.553 [2024-10-07 05:43:10.361358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.553 [2024-10-07 05:43:10.363895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.553 [2024-10-07 05:43:10.363945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:06.553 BaseBdev1 00:24:06.553 05:43:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:06.553 05:43:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:06.554 05:43:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:06.812 BaseBdev2_malloc 00:24:06.812 05:43:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:07.071 [2024-10-07 05:43:10.881709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:07.071 [2024-10-07 05:43:10.881805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.071 [2024-10-07 05:43:10.881852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:07.071 [2024-10-07 05:43:10.881907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.071 [2024-10-07 05:43:10.884331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.071 [2024-10-07 05:43:10.884380] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:07.071 BaseBdev2 00:24:07.071 05:43:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:07.071 05:43:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:07.071 05:43:10 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:07.330 BaseBdev3_malloc 00:24:07.330 05:43:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:07.589 [2024-10-07 05:43:11.346872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:07.589 [2024-10-07 05:43:11.346949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.589 [2024-10-07 05:43:11.346989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:07.589 [2024-10-07 05:43:11.347034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.589 [2024-10-07 05:43:11.349378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.589 [2024-10-07 05:43:11.349432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:07.589 BaseBdev3 00:24:07.589 05:43:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:07.848 spare_malloc 00:24:07.848 05:43:11 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:07.848 spare_delay 00:24:07.848 05:43:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:08.107 [2024-10-07 05:43:11.948915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:08.107 [2024-10-07 05:43:11.949004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.107 [2024-10-07 05:43:11.949040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:08.107 [2024-10-07 05:43:11.949086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.107 [2024-10-07 05:43:11.951536] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.107 [2024-10-07 05:43:11.951592] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:08.107 spare 00:24:08.107 05:43:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:08.366 [2024-10-07 05:43:12.125048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:08.366 [2024-10-07 05:43:12.127028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.366 [2024-10-07 05:43:12.127099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:08.366 [2024-10-07 05:43:12.127312] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:24:08.366 [2024-10-07 05:43:12.127327] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:08.366 [2024-10-07 05:43:12.127436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:08.366 [2024-10-07 05:43:12.131726] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:24:08.366 [2024-10-07 05:43:12.131750] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:24:08.366 [2024-10-07 05:43:12.131903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.366 05:43:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.367 05:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:08.367 "name": "raid_bdev1", 00:24:08.367 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:08.367 "strip_size_kb": 64, 00:24:08.367 "state": "online", 00:24:08.367 "raid_level": "raid5f", 00:24:08.367 "superblock": true, 00:24:08.367 "num_base_bdevs": 3, 00:24:08.367 "num_base_bdevs_discovered": 3, 00:24:08.367 "num_base_bdevs_operational": 3, 00:24:08.367 "base_bdevs_list": [ 00:24:08.367 { 00:24:08.367 "name": "BaseBdev1", 00:24:08.367 "uuid": "43fc664f-aa0a-55cc-b10e-c9b77774ceeb", 00:24:08.367 "is_configured": true, 00:24:08.367 "data_offset": 2048, 00:24:08.367 "data_size": 63488 00:24:08.367 }, 00:24:08.367 { 00:24:08.367 "name": "BaseBdev2", 00:24:08.367 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:08.367 "is_configured": true, 00:24:08.367 "data_offset": 2048, 00:24:08.367 "data_size": 63488 00:24:08.367 }, 00:24:08.367 { 00:24:08.367 "name": "BaseBdev3", 00:24:08.367 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:08.367 "is_configured": true, 00:24:08.367 "data_offset": 2048, 00:24:08.367 "data_size": 63488 00:24:08.367 } 00:24:08.367 ] 00:24:08.367 }' 00:24:08.367 05:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:08.367 05:43:12 -- common/autotest_common.sh@10 -- # set +x 00:24:09.304 05:43:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:09.304 05:43:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:09.304 [2024-10-07 05:43:13.193226] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:09.304 05:43:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:09.304 05:43:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.304 05:43:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:09.564 05:43:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:09.564 05:43:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:09.564 05:43:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:09.564 05:43:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@12 -- # local i 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.564 05:43:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:09.822 [2024-10-07 05:43:13.573195] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:09.822 /dev/nbd0 00:24:09.822 05:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:09.822 05:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:09.822 05:43:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:09.822 05:43:13 -- common/autotest_common.sh@857 -- # local i 00:24:09.822 05:43:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.822 05:43:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.822 05:43:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:09.822 05:43:13 -- common/autotest_common.sh@861 -- # break 00:24:09.822 05:43:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.822 05:43:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.822 05:43:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:09.822 1+0 records in 00:24:09.822 1+0 records out 00:24:09.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246014 s, 16.6 MB/s 00:24:09.822 05:43:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.822 05:43:13 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.822 05:43:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:09.822 05:43:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.822 05:43:13 -- common/autotest_common.sh@877 -- # return 0 00:24:09.822 05:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:09.822 05:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:09.822 05:43:13 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:09.822 05:43:13 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:09.822 05:43:13 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:09.822 05:43:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:10.081 496+0 records in 00:24:10.081 496+0 records out 00:24:10.081 65011712 bytes (65 MB, 62 MiB) copied, 0.415898 s, 156 MB/s 00:24:10.081 05:43:14 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@51 -- # local i 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:10.081 05:43:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:10.649 [2024-10-07 05:43:14.332342] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@41 -- # break 00:24:10.649 05:43:14 -- bdev/nbd_common.sh@45 -- # return 0 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:10.649 [2024-10-07 05:43:14.570409] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.649 05:43:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.907 05:43:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:10.907 "name": "raid_bdev1", 00:24:10.907 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:10.907 "strip_size_kb": 64, 00:24:10.907 "state": "online", 00:24:10.907 "raid_level": "raid5f", 00:24:10.907 "superblock": true, 00:24:10.908 "num_base_bdevs": 3, 00:24:10.908 "num_base_bdevs_discovered": 2, 00:24:10.908 "num_base_bdevs_operational": 2, 00:24:10.908 "base_bdevs_list": [ 00:24:10.908 { 00:24:10.908 "name": null, 00:24:10.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.908 "is_configured": false, 00:24:10.908 "data_offset": 2048, 00:24:10.908 "data_size": 63488 00:24:10.908 }, 00:24:10.908 { 00:24:10.908 "name": "BaseBdev2", 00:24:10.908 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:10.908 "is_configured": true, 00:24:10.908 "data_offset": 2048, 00:24:10.908 "data_size": 63488 00:24:10.908 }, 00:24:10.908 { 00:24:10.908 "name": "BaseBdev3", 00:24:10.908 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:10.908 "is_configured": true, 00:24:10.908 "data_offset": 2048, 00:24:10.908 "data_size": 63488 00:24:10.908 } 00:24:10.908 ] 00:24:10.908 }' 00:24:10.908 05:43:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:10.908 05:43:14 -- common/autotest_common.sh@10 -- # set +x 00:24:11.475 05:43:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:11.734 [2024-10-07 05:43:15.694655] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:24:11.734 [2024-10-07 05:43:15.694697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:11.734 [2024-10-07 05:43:15.706532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 00:24:11.993 05:43:15 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:11.993 [2024-10-07 05:43:15.725152] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.930 05:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.188 05:43:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.188 "name": "raid_bdev1", 00:24:13.188 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:13.188 "strip_size_kb": 64, 00:24:13.188 "state": "online", 00:24:13.188 "raid_level": "raid5f", 00:24:13.188 "superblock": true, 00:24:13.188 "num_base_bdevs": 3, 00:24:13.188 "num_base_bdevs_discovered": 3, 00:24:13.188 "num_base_bdevs_operational": 3, 00:24:13.188 "process": { 00:24:13.188 "type": "rebuild", 00:24:13.188 "target": "spare", 00:24:13.188 "progress": { 00:24:13.188 "blocks": 24576, 00:24:13.188 "percent": 19 00:24:13.188 } 00:24:13.188 }, 00:24:13.188 "base_bdevs_list": [ 00:24:13.188 { 00:24:13.188 "name": "spare", 00:24:13.188 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:13.188 "is_configured": true, 00:24:13.188 "data_offset": 2048, 00:24:13.188 "data_size": 63488 00:24:13.188 }, 00:24:13.188 { 00:24:13.188 "name": "BaseBdev2", 00:24:13.188 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:13.188 "is_configured": true, 00:24:13.188 "data_offset": 2048, 00:24:13.188 "data_size": 63488 00:24:13.188 }, 00:24:13.188 { 00:24:13.188 "name": "BaseBdev3", 00:24:13.188 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:13.188 "is_configured": true, 00:24:13.188 "data_offset": 2048, 00:24:13.188 "data_size": 63488 00:24:13.188 } 00:24:13.188 ] 00:24:13.188 }' 00:24:13.188 05:43:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.188 05:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.188 05:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.188 05:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.188 05:43:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:13.448 [2024-10-07 05:43:17.274786] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:13.448 [2024-10-07 05:43:17.339976] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:13.448 [2024-10-07 05:43:17.340052] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
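
The verify_raid_bdev_process calls traced here read the raid bdev's "process" object over RPC and compare .process.type and .process.target with jq. A hedged sketch of the same progress polling, using an illustrative once-per-second loop like the sleep 1 steps later in the trace:

# Sketch of the rebuild-progress polling: re-read bdev_raid_get_bdevs and
# inspect the "process" object with the same jq filters as the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

while :; do
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")      # "rebuild" while running
    blocks=$(jq -r '.process.progress.blocks // 0' <<< "$info")
    echo "process=$ptype progress_blocks=$blocks"
    [ "$ptype" = none ] && break                               # rebuild finished
    sleep 1
done
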
00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.448 05:43:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.707 05:43:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.707 "name": "raid_bdev1", 00:24:13.707 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:13.707 "strip_size_kb": 64, 00:24:13.707 "state": "online", 00:24:13.707 "raid_level": "raid5f", 00:24:13.707 "superblock": true, 00:24:13.707 "num_base_bdevs": 3, 00:24:13.707 "num_base_bdevs_discovered": 2, 00:24:13.707 "num_base_bdevs_operational": 2, 00:24:13.707 "base_bdevs_list": [ 00:24:13.707 { 00:24:13.707 "name": null, 00:24:13.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.707 "is_configured": false, 00:24:13.707 "data_offset": 2048, 00:24:13.707 "data_size": 63488 00:24:13.707 }, 00:24:13.707 { 00:24:13.707 "name": "BaseBdev2", 00:24:13.707 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:13.707 "is_configured": true, 00:24:13.707 "data_offset": 2048, 00:24:13.707 "data_size": 63488 00:24:13.707 }, 00:24:13.707 { 00:24:13.707 "name": "BaseBdev3", 00:24:13.707 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:13.707 "is_configured": true, 00:24:13.707 "data_offset": 2048, 00:24:13.707 "data_size": 63488 00:24:13.707 } 00:24:13.707 ] 00:24:13.707 }' 00:24:13.707 05:43:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.707 05:43:17 -- common/autotest_common.sh@10 -- # set +x 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.274 05:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.533 05:43:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.533 "name": "raid_bdev1", 00:24:14.533 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:14.533 "strip_size_kb": 64, 00:24:14.533 "state": "online", 00:24:14.533 "raid_level": "raid5f", 00:24:14.533 "superblock": true, 00:24:14.533 "num_base_bdevs": 3, 00:24:14.533 "num_base_bdevs_discovered": 2, 00:24:14.533 "num_base_bdevs_operational": 2, 00:24:14.533 "base_bdevs_list": [ 00:24:14.533 { 00:24:14.533 "name": null, 00:24:14.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.533 "is_configured": false, 00:24:14.533 "data_offset": 2048, 00:24:14.533 "data_size": 63488 00:24:14.533 }, 00:24:14.533 { 00:24:14.533 "name": "BaseBdev2", 00:24:14.533 "uuid": 
"c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:14.533 "is_configured": true, 00:24:14.533 "data_offset": 2048, 00:24:14.533 "data_size": 63488 00:24:14.533 }, 00:24:14.533 { 00:24:14.533 "name": "BaseBdev3", 00:24:14.533 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:14.533 "is_configured": true, 00:24:14.533 "data_offset": 2048, 00:24:14.533 "data_size": 63488 00:24:14.533 } 00:24:14.533 ] 00:24:14.533 }' 00:24:14.533 05:43:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.792 05:43:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.792 05:43:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.792 05:43:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:14.792 05:43:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:15.051 [2024-10-07 05:43:18.810414] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:15.051 [2024-10-07 05:43:18.810461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.051 [2024-10-07 05:43:18.821097] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:24:15.051 [2024-10-07 05:43:18.826978] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:15.051 05:43:18 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.988 05:43:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.247 "name": "raid_bdev1", 00:24:16.247 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:16.247 "strip_size_kb": 64, 00:24:16.247 "state": "online", 00:24:16.247 "raid_level": "raid5f", 00:24:16.247 "superblock": true, 00:24:16.247 "num_base_bdevs": 3, 00:24:16.247 "num_base_bdevs_discovered": 3, 00:24:16.247 "num_base_bdevs_operational": 3, 00:24:16.247 "process": { 00:24:16.247 "type": "rebuild", 00:24:16.247 "target": "spare", 00:24:16.247 "progress": { 00:24:16.247 "blocks": 24576, 00:24:16.247 "percent": 19 00:24:16.247 } 00:24:16.247 }, 00:24:16.247 "base_bdevs_list": [ 00:24:16.247 { 00:24:16.247 "name": "spare", 00:24:16.247 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:16.247 "is_configured": true, 00:24:16.247 "data_offset": 2048, 00:24:16.247 "data_size": 63488 00:24:16.247 }, 00:24:16.247 { 00:24:16.247 "name": "BaseBdev2", 00:24:16.247 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:16.247 "is_configured": true, 00:24:16.247 "data_offset": 2048, 00:24:16.247 "data_size": 63488 00:24:16.247 }, 00:24:16.247 { 00:24:16.247 "name": "BaseBdev3", 00:24:16.247 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:16.247 "is_configured": true, 00:24:16.247 "data_offset": 2048, 00:24:16.247 "data_size": 63488 00:24:16.247 } 00:24:16.247 ] 00:24:16.247 }' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:16.247 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@657 -- # local timeout=640 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.247 05:43:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.506 05:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.506 "name": "raid_bdev1", 00:24:16.506 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:16.506 "strip_size_kb": 64, 00:24:16.506 "state": "online", 00:24:16.506 "raid_level": "raid5f", 00:24:16.506 "superblock": true, 00:24:16.506 "num_base_bdevs": 3, 00:24:16.506 "num_base_bdevs_discovered": 3, 00:24:16.506 "num_base_bdevs_operational": 3, 00:24:16.506 "process": { 00:24:16.506 "type": "rebuild", 00:24:16.506 "target": "spare", 00:24:16.506 "progress": { 00:24:16.506 "blocks": 30720, 00:24:16.506 "percent": 24 00:24:16.506 } 00:24:16.506 }, 00:24:16.506 "base_bdevs_list": [ 00:24:16.506 { 00:24:16.506 "name": "spare", 00:24:16.506 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:16.506 "is_configured": true, 00:24:16.506 "data_offset": 2048, 00:24:16.506 "data_size": 63488 00:24:16.507 }, 00:24:16.507 { 00:24:16.507 "name": "BaseBdev2", 00:24:16.507 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:16.507 "is_configured": true, 00:24:16.507 "data_offset": 2048, 00:24:16.507 "data_size": 63488 00:24:16.507 }, 00:24:16.507 { 00:24:16.507 "name": "BaseBdev3", 00:24:16.507 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:16.507 "is_configured": true, 00:24:16.507 "data_offset": 2048, 00:24:16.507 "data_size": 63488 00:24:16.507 } 00:24:16.507 ] 00:24:16.507 }' 00:24:16.507 05:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.765 05:43:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.765 05:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.765 05:43:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.765 05:43:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.703 05:43:21 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.703 05:43:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.962 "name": "raid_bdev1", 00:24:17.962 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:17.962 "strip_size_kb": 64, 00:24:17.962 "state": "online", 00:24:17.962 "raid_level": "raid5f", 00:24:17.962 "superblock": true, 00:24:17.962 "num_base_bdevs": 3, 00:24:17.962 "num_base_bdevs_discovered": 3, 00:24:17.962 "num_base_bdevs_operational": 3, 00:24:17.962 "process": { 00:24:17.962 "type": "rebuild", 00:24:17.962 "target": "spare", 00:24:17.962 "progress": { 00:24:17.962 "blocks": 59392, 00:24:17.962 "percent": 46 00:24:17.962 } 00:24:17.962 }, 00:24:17.962 "base_bdevs_list": [ 00:24:17.962 { 00:24:17.962 "name": "spare", 00:24:17.962 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 2048, 00:24:17.962 "data_size": 63488 00:24:17.962 }, 00:24:17.962 { 00:24:17.962 "name": "BaseBdev2", 00:24:17.962 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 2048, 00:24:17.962 "data_size": 63488 00:24:17.962 }, 00:24:17.962 { 00:24:17.962 "name": "BaseBdev3", 00:24:17.962 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:17.962 "is_configured": true, 00:24:17.962 "data_offset": 2048, 00:24:17.962 "data_size": 63488 00:24:17.962 } 00:24:17.962 ] 00:24:17.962 }' 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:17.962 05:43:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.900 05:43:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.159 05:43:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:19.159 "name": "raid_bdev1", 00:24:19.159 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:19.159 "strip_size_kb": 64, 00:24:19.159 "state": "online", 00:24:19.159 "raid_level": "raid5f", 00:24:19.159 "superblock": true, 00:24:19.159 "num_base_bdevs": 3, 00:24:19.159 "num_base_bdevs_discovered": 3, 00:24:19.159 "num_base_bdevs_operational": 3, 00:24:19.159 "process": { 00:24:19.159 "type": "rebuild", 00:24:19.159 "target": "spare", 00:24:19.159 "progress": { 00:24:19.159 "blocks": 86016, 00:24:19.159 "percent": 67 00:24:19.159 } 
00:24:19.159 }, 00:24:19.159 "base_bdevs_list": [ 00:24:19.159 { 00:24:19.159 "name": "spare", 00:24:19.159 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:19.159 "is_configured": true, 00:24:19.159 "data_offset": 2048, 00:24:19.159 "data_size": 63488 00:24:19.159 }, 00:24:19.159 { 00:24:19.159 "name": "BaseBdev2", 00:24:19.159 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:19.159 "is_configured": true, 00:24:19.159 "data_offset": 2048, 00:24:19.159 "data_size": 63488 00:24:19.159 }, 00:24:19.159 { 00:24:19.159 "name": "BaseBdev3", 00:24:19.159 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:19.159 "is_configured": true, 00:24:19.159 "data_offset": 2048, 00:24:19.159 "data_size": 63488 00:24:19.159 } 00:24:19.159 ] 00:24:19.159 }' 00:24:19.159 05:43:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:19.418 05:43:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.418 05:43:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:19.418 05:43:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.418 05:43:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.354 05:43:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.613 05:43:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.613 "name": "raid_bdev1", 00:24:20.613 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:20.613 "strip_size_kb": 64, 00:24:20.613 "state": "online", 00:24:20.613 "raid_level": "raid5f", 00:24:20.613 "superblock": true, 00:24:20.613 "num_base_bdevs": 3, 00:24:20.613 "num_base_bdevs_discovered": 3, 00:24:20.613 "num_base_bdevs_operational": 3, 00:24:20.613 "process": { 00:24:20.613 "type": "rebuild", 00:24:20.613 "target": "spare", 00:24:20.613 "progress": { 00:24:20.613 "blocks": 112640, 00:24:20.613 "percent": 88 00:24:20.613 } 00:24:20.613 }, 00:24:20.613 "base_bdevs_list": [ 00:24:20.613 { 00:24:20.613 "name": "spare", 00:24:20.613 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:20.613 "is_configured": true, 00:24:20.613 "data_offset": 2048, 00:24:20.613 "data_size": 63488 00:24:20.613 }, 00:24:20.613 { 00:24:20.613 "name": "BaseBdev2", 00:24:20.613 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:20.613 "is_configured": true, 00:24:20.613 "data_offset": 2048, 00:24:20.613 "data_size": 63488 00:24:20.613 }, 00:24:20.613 { 00:24:20.613 "name": "BaseBdev3", 00:24:20.613 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:20.613 "is_configured": true, 00:24:20.613 "data_offset": 2048, 00:24:20.613 "data_size": 63488 00:24:20.613 } 00:24:20.613 ] 00:24:20.613 }' 00:24:20.613 05:43:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.613 05:43:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.613 05:43:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.905 05:43:24 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.905 05:43:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:21.181 [2024-10-07 05:43:25.082729] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:21.181 [2024-10-07 05:43:25.082802] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:21.181 [2024-10-07 05:43:25.082942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.750 05:43:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.008 "name": "raid_bdev1", 00:24:22.008 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:22.008 "strip_size_kb": 64, 00:24:22.008 "state": "online", 00:24:22.008 "raid_level": "raid5f", 00:24:22.008 "superblock": true, 00:24:22.008 "num_base_bdevs": 3, 00:24:22.008 "num_base_bdevs_discovered": 3, 00:24:22.008 "num_base_bdevs_operational": 3, 00:24:22.008 "base_bdevs_list": [ 00:24:22.008 { 00:24:22.008 "name": "spare", 00:24:22.008 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:22.008 "is_configured": true, 00:24:22.008 "data_offset": 2048, 00:24:22.008 "data_size": 63488 00:24:22.008 }, 00:24:22.008 { 00:24:22.008 "name": "BaseBdev2", 00:24:22.008 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:22.008 "is_configured": true, 00:24:22.008 "data_offset": 2048, 00:24:22.008 "data_size": 63488 00:24:22.008 }, 00:24:22.008 { 00:24:22.008 "name": "BaseBdev3", 00:24:22.008 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:22.008 "is_configured": true, 00:24:22.008 "data_offset": 2048, 00:24:22.008 "data_size": 63488 00:24:22.008 } 00:24:22.008 ] 00:24:22.008 }' 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:22.008 05:43:25 -- bdev/bdev_raid.sh@660 -- # break 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.009 05:43:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.267 05:43:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.267 "name": "raid_bdev1", 00:24:22.267 "uuid": 
"f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:22.267 "strip_size_kb": 64, 00:24:22.267 "state": "online", 00:24:22.267 "raid_level": "raid5f", 00:24:22.267 "superblock": true, 00:24:22.267 "num_base_bdevs": 3, 00:24:22.267 "num_base_bdevs_discovered": 3, 00:24:22.267 "num_base_bdevs_operational": 3, 00:24:22.267 "base_bdevs_list": [ 00:24:22.267 { 00:24:22.267 "name": "spare", 00:24:22.267 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:22.267 "is_configured": true, 00:24:22.267 "data_offset": 2048, 00:24:22.267 "data_size": 63488 00:24:22.267 }, 00:24:22.267 { 00:24:22.267 "name": "BaseBdev2", 00:24:22.267 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:22.267 "is_configured": true, 00:24:22.267 "data_offset": 2048, 00:24:22.267 "data_size": 63488 00:24:22.267 }, 00:24:22.267 { 00:24:22.267 "name": "BaseBdev3", 00:24:22.267 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:22.267 "is_configured": true, 00:24:22.267 "data_offset": 2048, 00:24:22.267 "data_size": 63488 00:24:22.267 } 00:24:22.267 ] 00:24:22.267 }' 00:24:22.267 05:43:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.267 05:43:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.526 "name": "raid_bdev1", 00:24:22.526 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:22.526 "strip_size_kb": 64, 00:24:22.526 "state": "online", 00:24:22.526 "raid_level": "raid5f", 00:24:22.526 "superblock": true, 00:24:22.526 "num_base_bdevs": 3, 00:24:22.526 "num_base_bdevs_discovered": 3, 00:24:22.526 "num_base_bdevs_operational": 3, 00:24:22.526 "base_bdevs_list": [ 00:24:22.526 { 00:24:22.526 "name": "spare", 00:24:22.526 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:22.526 "is_configured": true, 00:24:22.526 "data_offset": 2048, 00:24:22.526 "data_size": 63488 00:24:22.526 }, 00:24:22.526 { 00:24:22.526 "name": "BaseBdev2", 00:24:22.526 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:22.526 "is_configured": true, 00:24:22.526 "data_offset": 2048, 00:24:22.526 "data_size": 63488 00:24:22.526 }, 00:24:22.526 { 00:24:22.526 "name": "BaseBdev3", 00:24:22.526 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:22.526 "is_configured": true, 00:24:22.526 "data_offset": 2048, 00:24:22.526 "data_size": 63488 00:24:22.526 } 
00:24:22.526 ] 00:24:22.526 }' 00:24:22.526 05:43:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.526 05:43:26 -- common/autotest_common.sh@10 -- # set +x 00:24:23.094 05:43:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:23.353 [2024-10-07 05:43:27.285541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:23.353 [2024-10-07 05:43:27.285574] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:23.353 [2024-10-07 05:43:27.285686] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.353 [2024-10-07 05:43:27.285780] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.353 [2024-10-07 05:43:27.285794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:24:23.353 05:43:27 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.353 05:43:27 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:23.611 05:43:27 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:23.611 05:43:27 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:23.611 05:43:27 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@12 -- # local i 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:23.611 05:43:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:23.870 /dev/nbd0 00:24:23.870 05:43:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:23.870 05:43:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:23.870 05:43:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:23.870 05:43:27 -- common/autotest_common.sh@857 -- # local i 00:24:23.870 05:43:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:23.870 05:43:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:23.870 05:43:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:23.870 05:43:27 -- common/autotest_common.sh@861 -- # break 00:24:23.870 05:43:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:23.870 05:43:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:23.870 05:43:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:23.870 1+0 records in 00:24:23.870 1+0 records out 00:24:23.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468116 s, 8.7 MB/s 00:24:23.870 05:43:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.870 05:43:27 -- common/autotest_common.sh@874 -- # size=4096 00:24:23.870 05:43:27 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:23.870 05:43:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:23.870 05:43:27 -- common/autotest_common.sh@877 -- # return 0 00:24:23.870 05:43:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:23.870 05:43:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:23.870 05:43:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:24.129 /dev/nbd1 00:24:24.129 05:43:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:24.129 05:43:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:24.129 05:43:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:24.129 05:43:28 -- common/autotest_common.sh@857 -- # local i 00:24:24.129 05:43:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:24.129 05:43:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:24.129 05:43:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:24.129 05:43:28 -- common/autotest_common.sh@861 -- # break 00:24:24.129 05:43:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:24.129 05:43:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:24.129 05:43:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:24.129 1+0 records in 00:24:24.129 1+0 records out 00:24:24.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586868 s, 7.0 MB/s 00:24:24.130 05:43:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.130 05:43:28 -- common/autotest_common.sh@874 -- # size=4096 00:24:24.130 05:43:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:24.130 05:43:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:24.130 05:43:28 -- common/autotest_common.sh@877 -- # return 0 00:24:24.130 05:43:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:24.130 05:43:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:24.130 05:43:28 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:24.389 05:43:28 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@51 -- # local i 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:24.389 05:43:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@41 -- # break 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@45 -- # return 0 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:24.647 05:43:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@41 -- # break 00:24:24.905 05:43:28 -- bdev/nbd_common.sh@45 -- # return 0 00:24:24.905 05:43:28 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:24.905 05:43:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:24.905 05:43:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:24.905 05:43:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:25.165 05:43:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:25.424 [2024-10-07 05:43:29.253671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:25.424 [2024-10-07 05:43:29.253773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.424 [2024-10-07 05:43:29.253812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:25.424 [2024-10-07 05:43:29.253840] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.424 [2024-10-07 05:43:29.256216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.424 [2024-10-07 05:43:29.256289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:25.424 [2024-10-07 05:43:29.256393] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:25.424 [2024-10-07 05:43:29.256462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.424 BaseBdev1 00:24:25.424 05:43:29 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:25.424 05:43:29 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:25.424 05:43:29 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:25.683 05:43:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:25.942 [2024-10-07 05:43:29.665721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:25.942 [2024-10-07 05:43:29.665777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:25.942 [2024-10-07 05:43:29.665813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:25.942 [2024-10-07 05:43:29.665832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:25.942 [2024-10-07 05:43:29.666193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:25.942 [2024-10-07 05:43:29.666247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:25.942 [2024-10-07 05:43:29.666328] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 
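
The passthru delete/create pairs above are what force bdev_raid to re-examine each base bdev's on-disk superblock and re-claim it for raid_bdev1. A compact sketch of that step, using only the RPCs visible in the trace (bdev names and socket path as logged):

# Sketch of the superblock re-examine step: tearing down and re-creating a
# passthru base bdev triggers raid_bdev_examine_load_sb_cb, which finds the
# raid5f superblock written earlier and claims the bdev for raid_bdev1.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    "$rpc" -s "$sock" bdev_passthru_delete "$bdev"
    "$rpc" -s "$sock" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done
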
00:24:25.942 [2024-10-07 05:43:29.666342] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:25.942 [2024-10-07 05:43:29.666349] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.942 [2024-10-07 05:43:29.666370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:24:25.942 [2024-10-07 05:43:29.666420] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:25.942 BaseBdev2 00:24:25.942 05:43:29 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:25.942 05:43:29 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:25.942 05:43:29 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:25.942 05:43:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:26.201 [2024-10-07 05:43:30.041800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:26.201 [2024-10-07 05:43:30.041869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.201 [2024-10-07 05:43:30.041918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:26.201 [2024-10-07 05:43:30.041938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.201 [2024-10-07 05:43:30.042308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.201 [2024-10-07 05:43:30.042365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:26.201 [2024-10-07 05:43:30.042447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:26.201 [2024-10-07 05:43:30.042467] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.201 BaseBdev3 00:24:26.202 05:43:30 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:26.461 [2024-10-07 05:43:30.417875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:26.461 [2024-10-07 05:43:30.417947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.461 [2024-10-07 05:43:30.417986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:26.461 [2024-10-07 05:43:30.418014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.461 [2024-10-07 05:43:30.418403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.461 [2024-10-07 05:43:30.418461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:26.461 [2024-10-07 05:43:30.418559] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:26.461 [2024-10-07 05:43:30.418582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:26.461 spare 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 
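Editor's note: the verify_raid_bdev_state raid_bdev1 online raid5f 64 3 call invoked here checks the reassembled array; judging from the trace that follows, it dumps the raid bdev over JSON-RPC and compares the reported fields against the expected state, strip size, level and operational base-bdev count. A minimal standalone sketch of the same check, assuming a running SPDK target on the socket used throughout this log and jq on the host (field names taken from the bdev_raid_get_bdevs output recorded below); the script itself is hypothetical, not part of the test suite:

    #!/usr/bin/env bash
    # Hypothetical re-check of the state asserted by verify_raid_bdev_state.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Dump all raid bdevs and keep only raid_bdev1, as the test helper does.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    state=$(jq -r '.state' <<<"$info")
    level=$(jq -r '.raid_level' <<<"$info")
    strip=$(jq -r '.strip_size_kb' <<<"$info")
    nops=$(jq -r '.num_base_bdevs_operational' <<<"$info")

    # Expected values correspond to: online raid5f, 64 KiB strip, 3 operational bdevs.
    if [[ $state == online && $level == raid5f && $strip == 64 && $nops == 3 ]]; then
        echo "raid_bdev1 state OK"
    else
        echo "unexpected raid_bdev1 state:" >&2
        echo "$info" >&2
        exit 1
    fi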
00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.461 05:43:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.720 05:43:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.720 05:43:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.720 [2024-10-07 05:43:30.518680] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:24:26.720 [2024-10-07 05:43:30.518706] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:26.720 [2024-10-07 05:43:30.518812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:24:26.720 [2024-10-07 05:43:30.523134] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:24:26.720 [2024-10-07 05:43:30.523160] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:24:26.720 [2024-10-07 05:43:30.523391] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.720 05:43:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.720 "name": "raid_bdev1", 00:24:26.720 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:26.720 "strip_size_kb": 64, 00:24:26.720 "state": "online", 00:24:26.720 "raid_level": "raid5f", 00:24:26.720 "superblock": true, 00:24:26.720 "num_base_bdevs": 3, 00:24:26.720 "num_base_bdevs_discovered": 3, 00:24:26.720 "num_base_bdevs_operational": 3, 00:24:26.720 "base_bdevs_list": [ 00:24:26.720 { 00:24:26.720 "name": "spare", 00:24:26.720 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:26.720 "is_configured": true, 00:24:26.720 "data_offset": 2048, 00:24:26.720 "data_size": 63488 00:24:26.720 }, 00:24:26.720 { 00:24:26.720 "name": "BaseBdev2", 00:24:26.720 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:26.720 "is_configured": true, 00:24:26.720 "data_offset": 2048, 00:24:26.720 "data_size": 63488 00:24:26.720 }, 00:24:26.720 { 00:24:26.720 "name": "BaseBdev3", 00:24:26.720 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:26.720 "is_configured": true, 00:24:26.720 "data_offset": 2048, 00:24:26.720 "data_size": 63488 00:24:26.720 } 00:24:26.720 ] 00:24:26.720 }' 00:24:26.720 05:43:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.720 05:43:30 -- common/autotest_common.sh@10 -- # set +x 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:27.658 "name": "raid_bdev1", 00:24:27.658 "uuid": "f5fac81b-68a2-4b01-8938-c69e3cc8ce22", 00:24:27.658 "strip_size_kb": 64, 00:24:27.658 "state": "online", 00:24:27.658 "raid_level": "raid5f", 00:24:27.658 "superblock": true, 00:24:27.658 "num_base_bdevs": 3, 00:24:27.658 "num_base_bdevs_discovered": 3, 00:24:27.658 "num_base_bdevs_operational": 3, 00:24:27.658 "base_bdevs_list": [ 00:24:27.658 { 00:24:27.658 "name": "spare", 00:24:27.658 "uuid": "6c008be1-7279-5d0a-8086-a16d2d5bd8ad", 00:24:27.658 "is_configured": true, 00:24:27.658 "data_offset": 2048, 00:24:27.658 "data_size": 63488 00:24:27.658 }, 00:24:27.658 { 00:24:27.658 "name": "BaseBdev2", 00:24:27.658 "uuid": "c5729fdb-27ff-5cef-a5a9-269311f78ae0", 00:24:27.658 "is_configured": true, 00:24:27.658 "data_offset": 2048, 00:24:27.658 "data_size": 63488 00:24:27.658 }, 00:24:27.658 { 00:24:27.658 "name": "BaseBdev3", 00:24:27.658 "uuid": "55560c73-2452-5fc6-9efe-89b703d98e0c", 00:24:27.658 "is_configured": true, 00:24:27.658 "data_offset": 2048, 00:24:27.658 "data_size": 63488 00:24:27.658 } 00:24:27.658 ] 00:24:27.658 }' 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.658 05:43:31 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:27.917 05:43:31 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:27.917 05:43:31 -- bdev/bdev_raid.sh@709 -- # killprocess 172532 00:24:27.917 05:43:31 -- common/autotest_common.sh@926 -- # '[' -z 172532 ']' 00:24:27.917 05:43:31 -- common/autotest_common.sh@930 -- # kill -0 172532 00:24:27.917 05:43:31 -- common/autotest_common.sh@931 -- # uname 00:24:27.918 05:43:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.918 05:43:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 172532 00:24:28.177 05:43:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:28.177 05:43:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:28.177 05:43:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 172532' 00:24:28.177 killing process with pid 172532 00:24:28.177 05:43:31 -- common/autotest_common.sh@945 -- # kill 172532 00:24:28.177 Received shutdown signal, test time was about 60.000000 seconds 00:24:28.177 00:24:28.177 Latency(us) 00:24:28.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.177 =================================================================================================================== 00:24:28.177 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:28.177 [2024-10-07 05:43:31.904563] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.177 05:43:31 -- common/autotest_common.sh@950 -- # wait 172532 00:24:28.177 [2024-10-07 05:43:31.904633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.177 [2024-10-07 05:43:31.904721] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.177 [2024-10-07 05:43:31.904734] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:24:28.436 [2024-10-07 05:43:32.172215] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.373 ************************************ 00:24:29.373 END TEST raid5f_rebuild_test_sb 00:24:29.373 ************************************ 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:29.373 00:24:29.373 real 0m24.244s 00:24:29.373 user 0m37.452s 00:24:29.373 sys 0m3.046s 00:24:29.373 05:43:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:29.373 05:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:29.373 05:43:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:29.373 05:43:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:29.373 05:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.373 ************************************ 00:24:29.373 START TEST raid5f_state_function_test 00:24:29.373 ************************************ 00:24:29.373 05:43:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:29.373 05:43:33 -- 
bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=173170 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 173170' 00:24:29.373 Process raid pid: 173170 00:24:29.373 05:43:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 173170 /var/tmp/spdk-raid.sock 00:24:29.374 05:43:33 -- common/autotest_common.sh@819 -- # '[' -z 173170 ']' 00:24:29.374 05:43:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:29.374 05:43:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:29.374 05:43:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:29.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:29.374 05:43:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:29.374 05:43:33 -- common/autotest_common.sh@10 -- # set +x 00:24:29.374 [2024-10-07 05:43:33.341410] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:24:29.374 [2024-10-07 05:43:33.341613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.633 [2024-10-07 05:43:33.512452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.892 [2024-10-07 05:43:33.695886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.151 [2024-10-07 05:43:33.886369] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.410 05:43:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:30.410 05:43:34 -- common/autotest_common.sh@852 -- # return 0 00:24:30.410 05:43:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:30.669 [2024-10-07 05:43:34.417390] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:30.669 [2024-10-07 05:43:34.417482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:30.669 [2024-10-07 05:43:34.417496] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:30.669 [2024-10-07 05:43:34.417523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:30.669 [2024-10-07 05:43:34.417530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:30.669 [2024-10-07 05:43:34.417570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:30.669 [2024-10-07 05:43:34.417579] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:30.669 [2024-10-07 05:43:34.417602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.669 05:43:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.928 05:43:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.928 "name": "Existed_Raid", 00:24:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.928 "strip_size_kb": 64, 00:24:30.928 "state": "configuring", 00:24:30.928 "raid_level": "raid5f", 00:24:30.928 "superblock": false, 00:24:30.928 "num_base_bdevs": 4, 00:24:30.928 "num_base_bdevs_discovered": 0, 00:24:30.928 "num_base_bdevs_operational": 4, 00:24:30.928 "base_bdevs_list": [ 00:24:30.928 { 00:24:30.928 "name": "BaseBdev1", 00:24:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.928 "is_configured": false, 00:24:30.928 "data_offset": 0, 00:24:30.928 "data_size": 0 00:24:30.928 }, 00:24:30.928 { 00:24:30.928 "name": "BaseBdev2", 00:24:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.928 "is_configured": false, 00:24:30.928 "data_offset": 0, 00:24:30.928 "data_size": 0 00:24:30.928 }, 00:24:30.928 { 00:24:30.928 "name": "BaseBdev3", 00:24:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.928 "is_configured": false, 00:24:30.928 "data_offset": 0, 00:24:30.928 "data_size": 0 00:24:30.928 }, 00:24:30.928 { 00:24:30.928 "name": "BaseBdev4", 00:24:30.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.928 "is_configured": false, 00:24:30.928 "data_offset": 0, 00:24:30.928 "data_size": 0 00:24:30.928 } 00:24:30.928 ] 00:24:30.928 }' 00:24:30.928 05:43:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.928 05:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:31.496 05:43:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.496 [2024-10-07 05:43:35.437435] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.496 [2024-10-07 05:43:35.437469] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:31.496 05:43:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:31.755 [2024-10-07 05:43:35.693518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.755 [2024-10-07 05:43:35.693581] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:31.755 [2024-10-07 05:43:35.693593] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.755 [2024-10-07 05:43:35.693619] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.755 [2024-10-07 05:43:35.693627] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:31.755 [2024-10-07 05:43:35.693664] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:31.755 [2024-10-07 05:43:35.693671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:31.755 [2024-10-07 05:43:35.693694] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:31.755 05:43:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:32.014 [2024-10-07 05:43:35.975091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.014 BaseBdev1 00:24:32.014 05:43:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:32.014 05:43:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:32.014 05:43:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:32.014 05:43:35 -- common/autotest_common.sh@889 -- # local i 00:24:32.014 05:43:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:32.014 05:43:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:32.014 05:43:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.272 05:43:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:32.531 [ 00:24:32.531 { 00:24:32.531 "name": "BaseBdev1", 00:24:32.531 "aliases": [ 00:24:32.531 "0b253817-1792-4d5b-b47e-8630578820da" 00:24:32.531 ], 00:24:32.531 "product_name": "Malloc disk", 00:24:32.531 "block_size": 512, 00:24:32.531 "num_blocks": 65536, 00:24:32.531 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:32.531 "assigned_rate_limits": { 00:24:32.531 "rw_ios_per_sec": 0, 00:24:32.531 "rw_mbytes_per_sec": 0, 00:24:32.531 "r_mbytes_per_sec": 0, 00:24:32.531 "w_mbytes_per_sec": 0 00:24:32.531 }, 00:24:32.531 "claimed": true, 00:24:32.531 "claim_type": "exclusive_write", 00:24:32.531 "zoned": false, 00:24:32.531 "supported_io_types": { 00:24:32.531 "read": true, 00:24:32.531 "write": true, 00:24:32.531 "unmap": true, 00:24:32.531 "write_zeroes": true, 00:24:32.531 "flush": true, 00:24:32.531 "reset": true, 00:24:32.531 "compare": false, 00:24:32.531 "compare_and_write": false, 00:24:32.531 "abort": true, 00:24:32.531 "nvme_admin": false, 00:24:32.531 "nvme_io": false 00:24:32.531 }, 00:24:32.531 "memory_domains": [ 00:24:32.531 { 00:24:32.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.531 "dma_device_type": 2 00:24:32.531 } 00:24:32.531 ], 00:24:32.531 "driver_specific": {} 00:24:32.531 } 00:24:32.531 ] 00:24:32.531 05:43:36 -- common/autotest_common.sh@895 -- # return 0 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.531 05:43:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.790 05:43:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.790 "name": "Existed_Raid", 00:24:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.790 "strip_size_kb": 64, 00:24:32.790 "state": "configuring", 00:24:32.790 "raid_level": "raid5f", 00:24:32.790 "superblock": false, 00:24:32.790 "num_base_bdevs": 4, 00:24:32.790 "num_base_bdevs_discovered": 1, 00:24:32.790 "num_base_bdevs_operational": 4, 00:24:32.790 "base_bdevs_list": [ 00:24:32.790 { 00:24:32.790 "name": "BaseBdev1", 00:24:32.790 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:32.790 "is_configured": true, 00:24:32.790 "data_offset": 0, 00:24:32.790 "data_size": 65536 00:24:32.790 }, 00:24:32.790 { 00:24:32.790 "name": "BaseBdev2", 00:24:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.790 "is_configured": false, 00:24:32.790 "data_offset": 0, 00:24:32.790 "data_size": 0 00:24:32.790 }, 00:24:32.790 { 00:24:32.790 "name": "BaseBdev3", 00:24:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.790 "is_configured": false, 00:24:32.790 "data_offset": 0, 00:24:32.790 "data_size": 0 00:24:32.790 }, 00:24:32.790 { 00:24:32.790 "name": "BaseBdev4", 00:24:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.790 "is_configured": false, 00:24:32.790 "data_offset": 0, 00:24:32.790 "data_size": 0 00:24:32.790 } 00:24:32.790 ] 00:24:32.790 }' 00:24:32.790 05:43:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.790 05:43:36 -- common/autotest_common.sh@10 -- # set +x 00:24:33.358 05:43:37 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:33.616 [2024-10-07 05:43:37.403388] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:33.616 [2024-10-07 05:43:37.403437] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:33.616 05:43:37 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:33.616 05:43:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:33.616 [2024-10-07 05:43:37.587493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:33.616 [2024-10-07 05:43:37.589356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:33.616 [2024-10-07 05:43:37.589436] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:33.616 [2024-10-07 05:43:37.589448] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:33.616 [2024-10-07 05:43:37.589475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:33.616 [2024-10-07 05:43:37.589483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:33.616 [2024-10-07 05:43:37.589500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.875 05:43:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.875 "name": "Existed_Raid", 00:24:33.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.875 "strip_size_kb": 64, 00:24:33.875 "state": "configuring", 00:24:33.875 "raid_level": "raid5f", 00:24:33.875 "superblock": false, 00:24:33.875 "num_base_bdevs": 4, 00:24:33.875 "num_base_bdevs_discovered": 1, 00:24:33.875 "num_base_bdevs_operational": 4, 00:24:33.875 "base_bdevs_list": [ 00:24:33.875 { 00:24:33.875 "name": "BaseBdev1", 00:24:33.875 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:33.875 "is_configured": true, 00:24:33.876 "data_offset": 0, 00:24:33.876 "data_size": 65536 00:24:33.876 }, 00:24:33.876 { 00:24:33.876 "name": "BaseBdev2", 00:24:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.876 "is_configured": false, 00:24:33.876 "data_offset": 0, 00:24:33.876 "data_size": 0 00:24:33.876 }, 00:24:33.876 { 00:24:33.876 "name": "BaseBdev3", 00:24:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.876 "is_configured": false, 00:24:33.876 "data_offset": 0, 00:24:33.876 "data_size": 0 00:24:33.876 }, 00:24:33.876 { 00:24:33.876 "name": "BaseBdev4", 00:24:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.876 "is_configured": false, 00:24:33.876 "data_offset": 0, 00:24:33.876 "data_size": 0 00:24:33.876 } 00:24:33.876 ] 00:24:33.876 }' 00:24:33.876 05:43:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.876 05:43:37 -- common/autotest_common.sh@10 -- # set +x 00:24:34.444 05:43:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:34.703 [2024-10-07 05:43:38.608482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:34.703 BaseBdev2 00:24:34.703 05:43:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:34.704 05:43:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:34.704 05:43:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:34.704 05:43:38 -- common/autotest_common.sh@889 -- # local i 00:24:34.704 05:43:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:34.704 05:43:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:34.704 05:43:38 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.962 05:43:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:35.223 [ 00:24:35.223 { 00:24:35.223 "name": "BaseBdev2", 00:24:35.223 "aliases": [ 00:24:35.223 "a3edb9d7-d133-498d-86a7-b152a770be92" 00:24:35.223 ], 00:24:35.223 "product_name": "Malloc disk", 00:24:35.223 "block_size": 512, 00:24:35.223 "num_blocks": 65536, 00:24:35.223 "uuid": "a3edb9d7-d133-498d-86a7-b152a770be92", 00:24:35.223 "assigned_rate_limits": { 00:24:35.223 "rw_ios_per_sec": 0, 00:24:35.223 "rw_mbytes_per_sec": 0, 00:24:35.223 "r_mbytes_per_sec": 0, 00:24:35.223 "w_mbytes_per_sec": 0 00:24:35.223 }, 00:24:35.223 "claimed": true, 00:24:35.223 "claim_type": "exclusive_write", 00:24:35.223 "zoned": false, 00:24:35.223 "supported_io_types": { 00:24:35.223 "read": true, 00:24:35.223 "write": true, 00:24:35.223 "unmap": true, 00:24:35.223 "write_zeroes": true, 00:24:35.223 "flush": true, 00:24:35.223 "reset": true, 00:24:35.223 "compare": false, 00:24:35.223 "compare_and_write": false, 00:24:35.223 "abort": true, 00:24:35.223 "nvme_admin": false, 00:24:35.223 "nvme_io": false 00:24:35.223 }, 00:24:35.223 "memory_domains": [ 00:24:35.223 { 00:24:35.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.223 "dma_device_type": 2 00:24:35.223 } 00:24:35.223 ], 00:24:35.223 "driver_specific": {} 00:24:35.223 } 00:24:35.223 ] 00:24:35.223 05:43:39 -- common/autotest_common.sh@895 -- # return 0 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.223 05:43:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.482 05:43:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.482 "name": "Existed_Raid", 00:24:35.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.482 "strip_size_kb": 64, 00:24:35.482 "state": "configuring", 00:24:35.482 "raid_level": "raid5f", 00:24:35.482 "superblock": false, 00:24:35.482 "num_base_bdevs": 4, 00:24:35.482 "num_base_bdevs_discovered": 2, 00:24:35.482 "num_base_bdevs_operational": 4, 00:24:35.482 "base_bdevs_list": [ 00:24:35.482 { 00:24:35.482 "name": "BaseBdev1", 00:24:35.482 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:35.482 "is_configured": true, 00:24:35.482 "data_offset": 0, 00:24:35.482 "data_size": 65536 00:24:35.482 }, 00:24:35.482 { 00:24:35.482 "name": "BaseBdev2", 00:24:35.482 "uuid": 
"a3edb9d7-d133-498d-86a7-b152a770be92", 00:24:35.482 "is_configured": true, 00:24:35.482 "data_offset": 0, 00:24:35.482 "data_size": 65536 00:24:35.482 }, 00:24:35.482 { 00:24:35.482 "name": "BaseBdev3", 00:24:35.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.482 "is_configured": false, 00:24:35.482 "data_offset": 0, 00:24:35.482 "data_size": 0 00:24:35.482 }, 00:24:35.482 { 00:24:35.482 "name": "BaseBdev4", 00:24:35.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.482 "is_configured": false, 00:24:35.482 "data_offset": 0, 00:24:35.482 "data_size": 0 00:24:35.482 } 00:24:35.482 ] 00:24:35.482 }' 00:24:35.482 05:43:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.482 05:43:39 -- common/autotest_common.sh@10 -- # set +x 00:24:36.050 05:43:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:36.308 [2024-10-07 05:43:40.076294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:36.308 BaseBdev3 00:24:36.308 05:43:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:36.308 05:43:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:36.308 05:43:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:36.308 05:43:40 -- common/autotest_common.sh@889 -- # local i 00:24:36.309 05:43:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:36.309 05:43:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:36.309 05:43:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.567 05:43:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:36.567 [ 00:24:36.567 { 00:24:36.567 "name": "BaseBdev3", 00:24:36.567 "aliases": [ 00:24:36.567 "ab370e2e-a3ec-4e63-a20b-41ec7aed0cfa" 00:24:36.567 ], 00:24:36.567 "product_name": "Malloc disk", 00:24:36.567 "block_size": 512, 00:24:36.567 "num_blocks": 65536, 00:24:36.567 "uuid": "ab370e2e-a3ec-4e63-a20b-41ec7aed0cfa", 00:24:36.567 "assigned_rate_limits": { 00:24:36.567 "rw_ios_per_sec": 0, 00:24:36.567 "rw_mbytes_per_sec": 0, 00:24:36.567 "r_mbytes_per_sec": 0, 00:24:36.567 "w_mbytes_per_sec": 0 00:24:36.567 }, 00:24:36.567 "claimed": true, 00:24:36.567 "claim_type": "exclusive_write", 00:24:36.567 "zoned": false, 00:24:36.567 "supported_io_types": { 00:24:36.567 "read": true, 00:24:36.567 "write": true, 00:24:36.567 "unmap": true, 00:24:36.567 "write_zeroes": true, 00:24:36.567 "flush": true, 00:24:36.567 "reset": true, 00:24:36.567 "compare": false, 00:24:36.567 "compare_and_write": false, 00:24:36.567 "abort": true, 00:24:36.567 "nvme_admin": false, 00:24:36.567 "nvme_io": false 00:24:36.567 }, 00:24:36.567 "memory_domains": [ 00:24:36.567 { 00:24:36.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.567 "dma_device_type": 2 00:24:36.567 } 00:24:36.567 ], 00:24:36.567 "driver_specific": {} 00:24:36.567 } 00:24:36.567 ] 00:24:36.568 05:43:40 -- common/autotest_common.sh@895 -- # return 0 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.568 05:43:40 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.568 05:43:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.826 05:43:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.826 "name": "Existed_Raid", 00:24:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.826 "strip_size_kb": 64, 00:24:36.826 "state": "configuring", 00:24:36.826 "raid_level": "raid5f", 00:24:36.826 "superblock": false, 00:24:36.826 "num_base_bdevs": 4, 00:24:36.826 "num_base_bdevs_discovered": 3, 00:24:36.826 "num_base_bdevs_operational": 4, 00:24:36.826 "base_bdevs_list": [ 00:24:36.826 { 00:24:36.826 "name": "BaseBdev1", 00:24:36.826 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:36.826 "is_configured": true, 00:24:36.826 "data_offset": 0, 00:24:36.826 "data_size": 65536 00:24:36.826 }, 00:24:36.826 { 00:24:36.826 "name": "BaseBdev2", 00:24:36.826 "uuid": "a3edb9d7-d133-498d-86a7-b152a770be92", 00:24:36.826 "is_configured": true, 00:24:36.826 "data_offset": 0, 00:24:36.826 "data_size": 65536 00:24:36.826 }, 00:24:36.826 { 00:24:36.826 "name": "BaseBdev3", 00:24:36.826 "uuid": "ab370e2e-a3ec-4e63-a20b-41ec7aed0cfa", 00:24:36.826 "is_configured": true, 00:24:36.826 "data_offset": 0, 00:24:36.826 "data_size": 65536 00:24:36.826 }, 00:24:36.826 { 00:24:36.826 "name": "BaseBdev4", 00:24:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.826 "is_configured": false, 00:24:36.826 "data_offset": 0, 00:24:36.826 "data_size": 0 00:24:36.826 } 00:24:36.826 ] 00:24:36.826 }' 00:24:36.826 05:43:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.826 05:43:40 -- common/autotest_common.sh@10 -- # set +x 00:24:37.392 05:43:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:37.651 [2024-10-07 05:43:41.508546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:37.651 [2024-10-07 05:43:41.508612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:24:37.651 [2024-10-07 05:43:41.508622] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:37.651 [2024-10-07 05:43:41.508759] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:37.651 [2024-10-07 05:43:41.514384] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:24:37.651 [2024-10-07 05:43:41.514408] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:24:37.651 [2024-10-07 05:43:41.514669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.651 BaseBdev4 00:24:37.651 05:43:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:37.651 05:43:41 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:37.651 05:43:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:37.651 05:43:41 -- common/autotest_common.sh@889 -- # local i 00:24:37.651 05:43:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:37.651 05:43:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:37.651 05:43:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:37.909 05:43:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:38.167 [ 00:24:38.167 { 00:24:38.167 "name": "BaseBdev4", 00:24:38.167 "aliases": [ 00:24:38.167 "e64be915-af89-47d1-bff0-7f4c004782c1" 00:24:38.167 ], 00:24:38.167 "product_name": "Malloc disk", 00:24:38.167 "block_size": 512, 00:24:38.167 "num_blocks": 65536, 00:24:38.167 "uuid": "e64be915-af89-47d1-bff0-7f4c004782c1", 00:24:38.167 "assigned_rate_limits": { 00:24:38.167 "rw_ios_per_sec": 0, 00:24:38.167 "rw_mbytes_per_sec": 0, 00:24:38.167 "r_mbytes_per_sec": 0, 00:24:38.167 "w_mbytes_per_sec": 0 00:24:38.167 }, 00:24:38.167 "claimed": true, 00:24:38.167 "claim_type": "exclusive_write", 00:24:38.167 "zoned": false, 00:24:38.167 "supported_io_types": { 00:24:38.167 "read": true, 00:24:38.167 "write": true, 00:24:38.167 "unmap": true, 00:24:38.167 "write_zeroes": true, 00:24:38.167 "flush": true, 00:24:38.167 "reset": true, 00:24:38.167 "compare": false, 00:24:38.167 "compare_and_write": false, 00:24:38.167 "abort": true, 00:24:38.167 "nvme_admin": false, 00:24:38.167 "nvme_io": false 00:24:38.167 }, 00:24:38.167 "memory_domains": [ 00:24:38.167 { 00:24:38.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.167 "dma_device_type": 2 00:24:38.167 } 00:24:38.167 ], 00:24:38.167 "driver_specific": {} 00:24:38.167 } 00:24:38.167 ] 00:24:38.167 05:43:42 -- common/autotest_common.sh@895 -- # return 0 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.167 05:43:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.425 05:43:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.425 "name": "Existed_Raid", 00:24:38.425 "uuid": "1c6974d3-4671-4827-bf79-dfca11b1ed81", 00:24:38.425 "strip_size_kb": 64, 00:24:38.425 "state": "online", 00:24:38.425 "raid_level": "raid5f", 00:24:38.425 "superblock": false, 00:24:38.425 "num_base_bdevs": 4, 00:24:38.425 
"num_base_bdevs_discovered": 4, 00:24:38.425 "num_base_bdevs_operational": 4, 00:24:38.425 "base_bdevs_list": [ 00:24:38.425 { 00:24:38.425 "name": "BaseBdev1", 00:24:38.425 "uuid": "0b253817-1792-4d5b-b47e-8630578820da", 00:24:38.426 "is_configured": true, 00:24:38.426 "data_offset": 0, 00:24:38.426 "data_size": 65536 00:24:38.426 }, 00:24:38.426 { 00:24:38.426 "name": "BaseBdev2", 00:24:38.426 "uuid": "a3edb9d7-d133-498d-86a7-b152a770be92", 00:24:38.426 "is_configured": true, 00:24:38.426 "data_offset": 0, 00:24:38.426 "data_size": 65536 00:24:38.426 }, 00:24:38.426 { 00:24:38.426 "name": "BaseBdev3", 00:24:38.426 "uuid": "ab370e2e-a3ec-4e63-a20b-41ec7aed0cfa", 00:24:38.426 "is_configured": true, 00:24:38.426 "data_offset": 0, 00:24:38.426 "data_size": 65536 00:24:38.426 }, 00:24:38.426 { 00:24:38.426 "name": "BaseBdev4", 00:24:38.426 "uuid": "e64be915-af89-47d1-bff0-7f4c004782c1", 00:24:38.426 "is_configured": true, 00:24:38.426 "data_offset": 0, 00:24:38.426 "data_size": 65536 00:24:38.426 } 00:24:38.426 ] 00:24:38.426 }' 00:24:38.426 05:43:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.426 05:43:42 -- common/autotest_common.sh@10 -- # set +x 00:24:38.993 05:43:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:39.252 [2024-10-07 05:43:43.061124] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.252 05:43:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.510 05:43:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:39.510 "name": "Existed_Raid", 00:24:39.510 "uuid": "1c6974d3-4671-4827-bf79-dfca11b1ed81", 00:24:39.510 "strip_size_kb": 64, 00:24:39.510 "state": "online", 00:24:39.510 "raid_level": "raid5f", 00:24:39.510 "superblock": false, 00:24:39.510 "num_base_bdevs": 4, 00:24:39.510 "num_base_bdevs_discovered": 3, 00:24:39.510 "num_base_bdevs_operational": 3, 00:24:39.510 "base_bdevs_list": [ 00:24:39.510 { 00:24:39.510 "name": null, 00:24:39.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.510 "is_configured": false, 00:24:39.510 "data_offset": 0, 00:24:39.510 "data_size": 65536 00:24:39.510 }, 00:24:39.510 { 00:24:39.510 "name": 
"BaseBdev2", 00:24:39.510 "uuid": "a3edb9d7-d133-498d-86a7-b152a770be92", 00:24:39.510 "is_configured": true, 00:24:39.510 "data_offset": 0, 00:24:39.510 "data_size": 65536 00:24:39.510 }, 00:24:39.510 { 00:24:39.510 "name": "BaseBdev3", 00:24:39.510 "uuid": "ab370e2e-a3ec-4e63-a20b-41ec7aed0cfa", 00:24:39.510 "is_configured": true, 00:24:39.510 "data_offset": 0, 00:24:39.510 "data_size": 65536 00:24:39.510 }, 00:24:39.510 { 00:24:39.510 "name": "BaseBdev4", 00:24:39.510 "uuid": "e64be915-af89-47d1-bff0-7f4c004782c1", 00:24:39.510 "is_configured": true, 00:24:39.510 "data_offset": 0, 00:24:39.510 "data_size": 65536 00:24:39.510 } 00:24:39.510 ] 00:24:39.510 }' 00:24:39.510 05:43:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:39.510 05:43:43 -- common/autotest_common.sh@10 -- # set +x 00:24:40.077 05:43:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:40.077 05:43:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:40.077 05:43:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.077 05:43:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:40.336 05:43:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:40.336 05:43:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.336 05:43:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:40.595 [2024-10-07 05:43:44.332384] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:40.595 [2024-10-07 05:43:44.332421] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.595 [2024-10-07 05:43:44.332489] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.595 05:43:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:40.595 05:43:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:40.595 05:43:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.595 05:43:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:40.854 05:43:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:40.855 05:43:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.855 05:43:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:41.113 [2024-10-07 05:43:44.855824] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:41.113 05:43:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:41.113 05:43:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:41.113 05:43:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.113 05:43:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:41.371 05:43:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:41.372 05:43:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:41.372 05:43:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:41.372 [2024-10-07 05:43:45.342300] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:41.372 [2024-10-07 05:43:45.342357] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state 
offline 00:24:41.630 05:43:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:41.630 05:43:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:41.630 05:43:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:41.630 05:43:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.888 05:43:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:41.888 05:43:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:41.888 05:43:45 -- bdev/bdev_raid.sh@287 -- # killprocess 173170 00:24:41.888 05:43:45 -- common/autotest_common.sh@926 -- # '[' -z 173170 ']' 00:24:41.888 05:43:45 -- common/autotest_common.sh@930 -- # kill -0 173170 00:24:41.888 05:43:45 -- common/autotest_common.sh@931 -- # uname 00:24:41.888 05:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:41.888 05:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 173170 00:24:41.888 05:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:41.888 05:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:41.888 05:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 173170' 00:24:41.888 killing process with pid 173170 00:24:41.888 05:43:45 -- common/autotest_common.sh@945 -- # kill 173170 00:24:41.888 05:43:45 -- common/autotest_common.sh@950 -- # wait 173170 00:24:41.888 [2024-10-07 05:43:45.629778] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.888 [2024-10-07 05:43:45.629924] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:42.825 ************************************ 00:24:42.825 END TEST raid5f_state_function_test 00:24:42.825 ************************************ 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:42.825 00:24:42.825 real 0m13.329s 00:24:42.825 user 0m23.600s 00:24:42.825 sys 0m1.698s 00:24:42.825 05:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.825 05:43:46 -- common/autotest_common.sh@10 -- # set +x 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:42.825 05:43:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:42.825 05:43:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:42.825 05:43:46 -- common/autotest_common.sh@10 -- # set +x 00:24:42.825 ************************************ 00:24:42.825 START TEST raid5f_state_function_test_sb 00:24:42.825 ************************************ 00:24:42.825 05:43:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.825 
05:43:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=173596 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 173596' 00:24:42.825 Process raid pid: 173596 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:42.825 05:43:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 173596 /var/tmp/spdk-raid.sock 00:24:42.825 05:43:46 -- common/autotest_common.sh@819 -- # '[' -z 173596 ']' 00:24:42.825 05:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:42.825 05:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:42.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:42.825 05:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:42.825 05:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:42.825 05:43:46 -- common/autotest_common.sh@10 -- # set +x 00:24:42.825 [2024-10-07 05:43:46.732562] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
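Editor's note: the _sb variant of the state-function test differs from the plain run above only in superblock=true, which selects superblock_create_arg=-s; with -s, bdev_raid_create persists a superblock on each base bdev so the array can later be re-examined and reassembled (as seen in the rebuild_test_sb output earlier in this log). A hedged side-by-side sketch of the two create invocations, using the rpc.py path and raid socket from this log; the snippet is illustrative, not an excerpt from bdev_raid.sh:

    # Hypothetical comparison of the two create calls exercised by these tests.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # raid5f_state_function_test: no superblock, configuration exists only in the target
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # raid5f_state_function_test_sb: identical except for -s, which writes a
    # superblock to every base bdev so the raid survives re-examination
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid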
00:24:42.825 [2024-10-07 05:43:46.733366] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.085 [2024-10-07 05:43:46.909470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.344 [2024-10-07 05:43:47.142107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.603 [2024-10-07 05:43:47.332161] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.861 05:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:43.861 05:43:47 -- common/autotest_common.sh@852 -- # return 0 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:43.861 [2024-10-07 05:43:47.798723] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:43.861 [2024-10-07 05:43:47.798823] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:43.861 [2024-10-07 05:43:47.798837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:43.861 [2024-10-07 05:43:47.798860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:43.861 [2024-10-07 05:43:47.798867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:43.861 [2024-10-07 05:43:47.798906] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:43.861 [2024-10-07 05:43:47.798916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:43.861 [2024-10-07 05:43:47.798938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.861 05:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.120 05:43:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.120 "name": "Existed_Raid", 00:24:44.120 "uuid": "0ce6c122-725d-411e-9e06-4513454c47b5", 00:24:44.120 "strip_size_kb": 64, 00:24:44.120 "state": "configuring", 00:24:44.120 "raid_level": "raid5f", 00:24:44.120 "superblock": true, 00:24:44.120 "num_base_bdevs": 4, 00:24:44.120 "num_base_bdevs_discovered": 0, 00:24:44.120 "num_base_bdevs_operational": 4, 00:24:44.120 "base_bdevs_list": [ 00:24:44.120 { 
00:24:44.120 "name": "BaseBdev1", 00:24:44.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.120 "is_configured": false, 00:24:44.120 "data_offset": 0, 00:24:44.120 "data_size": 0 00:24:44.120 }, 00:24:44.120 { 00:24:44.120 "name": "BaseBdev2", 00:24:44.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.120 "is_configured": false, 00:24:44.120 "data_offset": 0, 00:24:44.120 "data_size": 0 00:24:44.120 }, 00:24:44.120 { 00:24:44.120 "name": "BaseBdev3", 00:24:44.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.120 "is_configured": false, 00:24:44.120 "data_offset": 0, 00:24:44.120 "data_size": 0 00:24:44.120 }, 00:24:44.120 { 00:24:44.120 "name": "BaseBdev4", 00:24:44.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.120 "is_configured": false, 00:24:44.120 "data_offset": 0, 00:24:44.120 "data_size": 0 00:24:44.120 } 00:24:44.120 ] 00:24:44.120 }' 00:24:44.120 05:43:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.120 05:43:48 -- common/autotest_common.sh@10 -- # set +x 00:24:44.685 05:43:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:44.943 [2024-10-07 05:43:48.842739] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:44.943 [2024-10-07 05:43:48.842773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:44.943 05:43:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:45.201 [2024-10-07 05:43:49.090847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:45.201 [2024-10-07 05:43:49.090908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:45.201 [2024-10-07 05:43:49.090920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:45.201 [2024-10-07 05:43:49.090945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:45.201 [2024-10-07 05:43:49.090953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:45.201 [2024-10-07 05:43:49.090989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:45.201 [2024-10-07 05:43:49.090997] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:45.201 [2024-10-07 05:43:49.091019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:45.201 05:43:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:45.460 [2024-10-07 05:43:49.312362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:45.460 BaseBdev1 00:24:45.460 05:43:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:45.460 05:43:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:45.460 05:43:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:45.460 05:43:49 -- common/autotest_common.sh@889 -- # local i 00:24:45.460 05:43:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:45.460 05:43:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:45.460 05:43:49 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:45.719 05:43:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:45.979 [ 00:24:45.979 { 00:24:45.979 "name": "BaseBdev1", 00:24:45.979 "aliases": [ 00:24:45.979 "d4dea534-ee07-4e99-a9fd-2c706b83d94d" 00:24:45.979 ], 00:24:45.979 "product_name": "Malloc disk", 00:24:45.979 "block_size": 512, 00:24:45.979 "num_blocks": 65536, 00:24:45.979 "uuid": "d4dea534-ee07-4e99-a9fd-2c706b83d94d", 00:24:45.979 "assigned_rate_limits": { 00:24:45.979 "rw_ios_per_sec": 0, 00:24:45.979 "rw_mbytes_per_sec": 0, 00:24:45.979 "r_mbytes_per_sec": 0, 00:24:45.979 "w_mbytes_per_sec": 0 00:24:45.979 }, 00:24:45.979 "claimed": true, 00:24:45.979 "claim_type": "exclusive_write", 00:24:45.979 "zoned": false, 00:24:45.979 "supported_io_types": { 00:24:45.979 "read": true, 00:24:45.979 "write": true, 00:24:45.979 "unmap": true, 00:24:45.979 "write_zeroes": true, 00:24:45.979 "flush": true, 00:24:45.979 "reset": true, 00:24:45.979 "compare": false, 00:24:45.979 "compare_and_write": false, 00:24:45.979 "abort": true, 00:24:45.979 "nvme_admin": false, 00:24:45.979 "nvme_io": false 00:24:45.979 }, 00:24:45.979 "memory_domains": [ 00:24:45.979 { 00:24:45.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.979 "dma_device_type": 2 00:24:45.979 } 00:24:45.979 ], 00:24:45.979 "driver_specific": {} 00:24:45.979 } 00:24:45.979 ] 00:24:45.979 05:43:49 -- common/autotest_common.sh@895 -- # return 0 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.979 "name": "Existed_Raid", 00:24:45.979 "uuid": "c197a1dc-be7c-4ff7-bb83-706065751ed0", 00:24:45.979 "strip_size_kb": 64, 00:24:45.979 "state": "configuring", 00:24:45.979 "raid_level": "raid5f", 00:24:45.979 "superblock": true, 00:24:45.979 "num_base_bdevs": 4, 00:24:45.979 "num_base_bdevs_discovered": 1, 00:24:45.979 "num_base_bdevs_operational": 4, 00:24:45.979 "base_bdevs_list": [ 00:24:45.979 { 00:24:45.979 "name": "BaseBdev1", 00:24:45.979 "uuid": "d4dea534-ee07-4e99-a9fd-2c706b83d94d", 00:24:45.979 "is_configured": true, 00:24:45.979 "data_offset": 2048, 00:24:45.979 "data_size": 63488 00:24:45.979 }, 00:24:45.979 { 00:24:45.979 "name": "BaseBdev2", 00:24:45.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.979 "is_configured": false, 00:24:45.979 "data_offset": 0, 00:24:45.979 "data_size": 0 
00:24:45.979 }, 00:24:45.979 { 00:24:45.979 "name": "BaseBdev3", 00:24:45.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.979 "is_configured": false, 00:24:45.979 "data_offset": 0, 00:24:45.979 "data_size": 0 00:24:45.979 }, 00:24:45.979 { 00:24:45.979 "name": "BaseBdev4", 00:24:45.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.979 "is_configured": false, 00:24:45.979 "data_offset": 0, 00:24:45.979 "data_size": 0 00:24:45.979 } 00:24:45.979 ] 00:24:45.979 }' 00:24:45.979 05:43:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.979 05:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:46.915 05:43:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:46.915 [2024-10-07 05:43:50.724581] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:46.915 [2024-10-07 05:43:50.724617] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:46.915 05:43:50 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:46.915 05:43:50 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:47.175 05:43:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:47.434 BaseBdev1 00:24:47.434 05:43:51 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:47.434 05:43:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:47.434 05:43:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:47.434 05:43:51 -- common/autotest_common.sh@889 -- # local i 00:24:47.434 05:43:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:47.434 05:43:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:47.434 05:43:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:47.692 05:43:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:47.952 [ 00:24:47.952 { 00:24:47.952 "name": "BaseBdev1", 00:24:47.952 "aliases": [ 00:24:47.952 "68f678f7-bf74-4e06-9743-e8d97cc3f0a9" 00:24:47.952 ], 00:24:47.952 "product_name": "Malloc disk", 00:24:47.952 "block_size": 512, 00:24:47.952 "num_blocks": 65536, 00:24:47.952 "uuid": "68f678f7-bf74-4e06-9743-e8d97cc3f0a9", 00:24:47.952 "assigned_rate_limits": { 00:24:47.952 "rw_ios_per_sec": 0, 00:24:47.952 "rw_mbytes_per_sec": 0, 00:24:47.952 "r_mbytes_per_sec": 0, 00:24:47.952 "w_mbytes_per_sec": 0 00:24:47.952 }, 00:24:47.952 "claimed": false, 00:24:47.952 "zoned": false, 00:24:47.952 "supported_io_types": { 00:24:47.952 "read": true, 00:24:47.952 "write": true, 00:24:47.952 "unmap": true, 00:24:47.952 "write_zeroes": true, 00:24:47.952 "flush": true, 00:24:47.952 "reset": true, 00:24:47.952 "compare": false, 00:24:47.952 "compare_and_write": false, 00:24:47.952 "abort": true, 00:24:47.952 "nvme_admin": false, 00:24:47.952 "nvme_io": false 00:24:47.952 }, 00:24:47.952 "memory_domains": [ 00:24:47.952 { 00:24:47.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.952 "dma_device_type": 2 00:24:47.952 } 00:24:47.952 ], 00:24:47.952 "driver_specific": {} 00:24:47.952 } 00:24:47.952 ] 00:24:47.952 05:43:51 -- common/autotest_common.sh@895 -- # return 0 00:24:47.952 05:43:51 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:48.211 [2024-10-07 05:43:51.931569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.211 [2024-10-07 05:43:51.934014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:48.211 [2024-10-07 05:43:51.934235] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:48.211 [2024-10-07 05:43:51.934354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:48.211 [2024-10-07 05:43:51.934422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:48.211 [2024-10-07 05:43:51.934533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:48.211 [2024-10-07 05:43:51.934594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.211 05:43:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.211 05:43:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.211 "name": "Existed_Raid", 00:24:48.211 "uuid": "ad944358-4a9e-481c-be9a-b1b142816462", 00:24:48.211 "strip_size_kb": 64, 00:24:48.211 "state": "configuring", 00:24:48.211 "raid_level": "raid5f", 00:24:48.211 "superblock": true, 00:24:48.211 "num_base_bdevs": 4, 00:24:48.211 "num_base_bdevs_discovered": 1, 00:24:48.211 "num_base_bdevs_operational": 4, 00:24:48.211 "base_bdevs_list": [ 00:24:48.211 { 00:24:48.211 "name": "BaseBdev1", 00:24:48.211 "uuid": "68f678f7-bf74-4e06-9743-e8d97cc3f0a9", 00:24:48.211 "is_configured": true, 00:24:48.211 "data_offset": 2048, 00:24:48.211 "data_size": 63488 00:24:48.211 }, 00:24:48.211 { 00:24:48.211 "name": "BaseBdev2", 00:24:48.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.211 "is_configured": false, 00:24:48.211 "data_offset": 0, 00:24:48.211 "data_size": 0 00:24:48.211 }, 00:24:48.211 { 00:24:48.211 "name": "BaseBdev3", 00:24:48.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.211 "is_configured": false, 00:24:48.211 "data_offset": 0, 00:24:48.211 "data_size": 0 00:24:48.211 }, 00:24:48.211 { 00:24:48.211 "name": "BaseBdev4", 00:24:48.211 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:48.211 "is_configured": false, 00:24:48.211 "data_offset": 0, 00:24:48.211 "data_size": 0 00:24:48.211 } 00:24:48.211 ] 00:24:48.211 }' 00:24:48.211 05:43:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.211 05:43:52 -- common/autotest_common.sh@10 -- # set +x 00:24:48.779 05:43:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:49.039 [2024-10-07 05:43:52.984524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:49.039 BaseBdev2 00:24:49.039 05:43:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:49.039 05:43:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:49.039 05:43:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:49.039 05:43:52 -- common/autotest_common.sh@889 -- # local i 00:24:49.039 05:43:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:49.039 05:43:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:49.039 05:43:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:49.298 05:43:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:49.589 [ 00:24:49.589 { 00:24:49.589 "name": "BaseBdev2", 00:24:49.589 "aliases": [ 00:24:49.589 "66ea974f-4c66-4b7c-bf16-1a8588a9a086" 00:24:49.589 ], 00:24:49.589 "product_name": "Malloc disk", 00:24:49.589 "block_size": 512, 00:24:49.589 "num_blocks": 65536, 00:24:49.589 "uuid": "66ea974f-4c66-4b7c-bf16-1a8588a9a086", 00:24:49.589 "assigned_rate_limits": { 00:24:49.589 "rw_ios_per_sec": 0, 00:24:49.589 "rw_mbytes_per_sec": 0, 00:24:49.589 "r_mbytes_per_sec": 0, 00:24:49.589 "w_mbytes_per_sec": 0 00:24:49.589 }, 00:24:49.589 "claimed": true, 00:24:49.589 "claim_type": "exclusive_write", 00:24:49.589 "zoned": false, 00:24:49.589 "supported_io_types": { 00:24:49.589 "read": true, 00:24:49.589 "write": true, 00:24:49.589 "unmap": true, 00:24:49.589 "write_zeroes": true, 00:24:49.589 "flush": true, 00:24:49.589 "reset": true, 00:24:49.589 "compare": false, 00:24:49.589 "compare_and_write": false, 00:24:49.589 "abort": true, 00:24:49.589 "nvme_admin": false, 00:24:49.589 "nvme_io": false 00:24:49.589 }, 00:24:49.589 "memory_domains": [ 00:24:49.589 { 00:24:49.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.589 "dma_device_type": 2 00:24:49.589 } 00:24:49.589 ], 00:24:49.589 "driver_specific": {} 00:24:49.589 } 00:24:49.589 ] 00:24:49.589 05:43:53 -- common/autotest_common.sh@895 -- # return 0 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.589 05:43:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.856 05:43:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:49.856 "name": "Existed_Raid", 00:24:49.856 "uuid": "ad944358-4a9e-481c-be9a-b1b142816462", 00:24:49.856 "strip_size_kb": 64, 00:24:49.856 "state": "configuring", 00:24:49.856 "raid_level": "raid5f", 00:24:49.856 "superblock": true, 00:24:49.856 "num_base_bdevs": 4, 00:24:49.856 "num_base_bdevs_discovered": 2, 00:24:49.856 "num_base_bdevs_operational": 4, 00:24:49.856 "base_bdevs_list": [ 00:24:49.856 { 00:24:49.856 "name": "BaseBdev1", 00:24:49.856 "uuid": "68f678f7-bf74-4e06-9743-e8d97cc3f0a9", 00:24:49.856 "is_configured": true, 00:24:49.856 "data_offset": 2048, 00:24:49.856 "data_size": 63488 00:24:49.856 }, 00:24:49.856 { 00:24:49.856 "name": "BaseBdev2", 00:24:49.856 "uuid": "66ea974f-4c66-4b7c-bf16-1a8588a9a086", 00:24:49.857 "is_configured": true, 00:24:49.857 "data_offset": 2048, 00:24:49.857 "data_size": 63488 00:24:49.857 }, 00:24:49.857 { 00:24:49.857 "name": "BaseBdev3", 00:24:49.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.857 "is_configured": false, 00:24:49.857 "data_offset": 0, 00:24:49.857 "data_size": 0 00:24:49.857 }, 00:24:49.857 { 00:24:49.857 "name": "BaseBdev4", 00:24:49.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.857 "is_configured": false, 00:24:49.857 "data_offset": 0, 00:24:49.857 "data_size": 0 00:24:49.857 } 00:24:49.857 ] 00:24:49.857 }' 00:24:49.857 05:43:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:49.857 05:43:53 -- common/autotest_common.sh@10 -- # set +x 00:24:50.425 05:43:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:50.684 [2024-10-07 05:43:54.588442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:50.684 BaseBdev3 00:24:50.684 05:43:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:50.684 05:43:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:50.684 05:43:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:50.684 05:43:54 -- common/autotest_common.sh@889 -- # local i 00:24:50.684 05:43:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:50.684 05:43:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:50.684 05:43:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:50.944 05:43:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:51.203 [ 00:24:51.203 { 00:24:51.203 "name": "BaseBdev3", 00:24:51.203 "aliases": [ 00:24:51.203 "4979cf84-8f31-4371-935a-cd7146a72ca9" 00:24:51.203 ], 00:24:51.203 "product_name": "Malloc disk", 00:24:51.203 "block_size": 512, 00:24:51.203 "num_blocks": 65536, 00:24:51.203 "uuid": "4979cf84-8f31-4371-935a-cd7146a72ca9", 00:24:51.203 "assigned_rate_limits": { 00:24:51.203 "rw_ios_per_sec": 0, 00:24:51.203 "rw_mbytes_per_sec": 0, 00:24:51.203 "r_mbytes_per_sec": 0, 00:24:51.203 "w_mbytes_per_sec": 0 00:24:51.203 }, 00:24:51.203 "claimed": true, 00:24:51.203 "claim_type": "exclusive_write", 
00:24:51.203 "zoned": false, 00:24:51.203 "supported_io_types": { 00:24:51.203 "read": true, 00:24:51.203 "write": true, 00:24:51.203 "unmap": true, 00:24:51.203 "write_zeroes": true, 00:24:51.203 "flush": true, 00:24:51.203 "reset": true, 00:24:51.203 "compare": false, 00:24:51.203 "compare_and_write": false, 00:24:51.203 "abort": true, 00:24:51.203 "nvme_admin": false, 00:24:51.203 "nvme_io": false 00:24:51.203 }, 00:24:51.203 "memory_domains": [ 00:24:51.203 { 00:24:51.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.203 "dma_device_type": 2 00:24:51.203 } 00:24:51.203 ], 00:24:51.203 "driver_specific": {} 00:24:51.203 } 00:24:51.203 ] 00:24:51.203 05:43:55 -- common/autotest_common.sh@895 -- # return 0 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.203 05:43:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.463 05:43:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.463 "name": "Existed_Raid", 00:24:51.463 "uuid": "ad944358-4a9e-481c-be9a-b1b142816462", 00:24:51.463 "strip_size_kb": 64, 00:24:51.463 "state": "configuring", 00:24:51.463 "raid_level": "raid5f", 00:24:51.463 "superblock": true, 00:24:51.463 "num_base_bdevs": 4, 00:24:51.463 "num_base_bdevs_discovered": 3, 00:24:51.463 "num_base_bdevs_operational": 4, 00:24:51.463 "base_bdevs_list": [ 00:24:51.463 { 00:24:51.463 "name": "BaseBdev1", 00:24:51.463 "uuid": "68f678f7-bf74-4e06-9743-e8d97cc3f0a9", 00:24:51.463 "is_configured": true, 00:24:51.463 "data_offset": 2048, 00:24:51.463 "data_size": 63488 00:24:51.463 }, 00:24:51.463 { 00:24:51.463 "name": "BaseBdev2", 00:24:51.463 "uuid": "66ea974f-4c66-4b7c-bf16-1a8588a9a086", 00:24:51.463 "is_configured": true, 00:24:51.463 "data_offset": 2048, 00:24:51.463 "data_size": 63488 00:24:51.463 }, 00:24:51.463 { 00:24:51.463 "name": "BaseBdev3", 00:24:51.463 "uuid": "4979cf84-8f31-4371-935a-cd7146a72ca9", 00:24:51.463 "is_configured": true, 00:24:51.463 "data_offset": 2048, 00:24:51.463 "data_size": 63488 00:24:51.463 }, 00:24:51.463 { 00:24:51.463 "name": "BaseBdev4", 00:24:51.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.463 "is_configured": false, 00:24:51.463 "data_offset": 0, 00:24:51.463 "data_size": 0 00:24:51.463 } 00:24:51.463 ] 00:24:51.463 }' 00:24:51.463 05:43:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.463 05:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:52.031 05:43:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:52.289 [2024-10-07 05:43:56.097119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:52.290 [2024-10-07 05:43:56.097528] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:24:52.290 [2024-10-07 05:43:56.097656] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:52.290 [2024-10-07 05:43:56.097812] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:24:52.290 BaseBdev4 00:24:52.290 [2024-10-07 05:43:56.103806] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:52.290 [2024-10-07 05:43:56.103950] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:52.290 [2024-10-07 05:43:56.104221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:52.290 05:43:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:52.290 05:43:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:52.290 05:43:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:52.290 05:43:56 -- common/autotest_common.sh@889 -- # local i 00:24:52.290 05:43:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:52.290 05:43:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:52.290 05:43:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:52.548 05:43:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:52.548 [ 00:24:52.548 { 00:24:52.548 "name": "BaseBdev4", 00:24:52.548 "aliases": [ 00:24:52.548 "b76483c9-4d39-49c5-b9e1-643ea3a215be" 00:24:52.548 ], 00:24:52.548 "product_name": "Malloc disk", 00:24:52.548 "block_size": 512, 00:24:52.548 "num_blocks": 65536, 00:24:52.548 "uuid": "b76483c9-4d39-49c5-b9e1-643ea3a215be", 00:24:52.548 "assigned_rate_limits": { 00:24:52.548 "rw_ios_per_sec": 0, 00:24:52.548 "rw_mbytes_per_sec": 0, 00:24:52.548 "r_mbytes_per_sec": 0, 00:24:52.548 "w_mbytes_per_sec": 0 00:24:52.548 }, 00:24:52.548 "claimed": true, 00:24:52.548 "claim_type": "exclusive_write", 00:24:52.548 "zoned": false, 00:24:52.548 "supported_io_types": { 00:24:52.548 "read": true, 00:24:52.548 "write": true, 00:24:52.548 "unmap": true, 00:24:52.548 "write_zeroes": true, 00:24:52.548 "flush": true, 00:24:52.548 "reset": true, 00:24:52.548 "compare": false, 00:24:52.548 "compare_and_write": false, 00:24:52.548 "abort": true, 00:24:52.548 "nvme_admin": false, 00:24:52.548 "nvme_io": false 00:24:52.548 }, 00:24:52.548 "memory_domains": [ 00:24:52.548 { 00:24:52.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.548 "dma_device_type": 2 00:24:52.548 } 00:24:52.548 ], 00:24:52.548 "driver_specific": {} 00:24:52.548 } 00:24:52.548 ] 00:24:52.807 05:43:56 -- common/autotest_common.sh@895 -- # return 0 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.807 "name": "Existed_Raid", 00:24:52.807 "uuid": "ad944358-4a9e-481c-be9a-b1b142816462", 00:24:52.807 "strip_size_kb": 64, 00:24:52.807 "state": "online", 00:24:52.807 "raid_level": "raid5f", 00:24:52.807 "superblock": true, 00:24:52.807 "num_base_bdevs": 4, 00:24:52.807 "num_base_bdevs_discovered": 4, 00:24:52.807 "num_base_bdevs_operational": 4, 00:24:52.807 "base_bdevs_list": [ 00:24:52.807 { 00:24:52.807 "name": "BaseBdev1", 00:24:52.807 "uuid": "68f678f7-bf74-4e06-9743-e8d97cc3f0a9", 00:24:52.807 "is_configured": true, 00:24:52.807 "data_offset": 2048, 00:24:52.807 "data_size": 63488 00:24:52.807 }, 00:24:52.807 { 00:24:52.807 "name": "BaseBdev2", 00:24:52.807 "uuid": "66ea974f-4c66-4b7c-bf16-1a8588a9a086", 00:24:52.807 "is_configured": true, 00:24:52.807 "data_offset": 2048, 00:24:52.807 "data_size": 63488 00:24:52.807 }, 00:24:52.807 { 00:24:52.807 "name": "BaseBdev3", 00:24:52.807 "uuid": "4979cf84-8f31-4371-935a-cd7146a72ca9", 00:24:52.807 "is_configured": true, 00:24:52.807 "data_offset": 2048, 00:24:52.807 "data_size": 63488 00:24:52.807 }, 00:24:52.807 { 00:24:52.807 "name": "BaseBdev4", 00:24:52.807 "uuid": "b76483c9-4d39-49c5-b9e1-643ea3a215be", 00:24:52.807 "is_configured": true, 00:24:52.807 "data_offset": 2048, 00:24:52.807 "data_size": 63488 00:24:52.807 } 00:24:52.807 ] 00:24:52.807 }' 00:24:52.807 05:43:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.807 05:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:53.374 05:43:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:53.634 [2024-10-07 05:43:57.458731] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.634 05:43:57 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.634 05:43:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.894 05:43:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.894 "name": "Existed_Raid", 00:24:53.894 "uuid": "ad944358-4a9e-481c-be9a-b1b142816462", 00:24:53.894 "strip_size_kb": 64, 00:24:53.894 "state": "online", 00:24:53.894 "raid_level": "raid5f", 00:24:53.894 "superblock": true, 00:24:53.894 "num_base_bdevs": 4, 00:24:53.894 "num_base_bdevs_discovered": 3, 00:24:53.894 "num_base_bdevs_operational": 3, 00:24:53.894 "base_bdevs_list": [ 00:24:53.894 { 00:24:53.894 "name": null, 00:24:53.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.894 "is_configured": false, 00:24:53.894 "data_offset": 2048, 00:24:53.894 "data_size": 63488 00:24:53.894 }, 00:24:53.894 { 00:24:53.894 "name": "BaseBdev2", 00:24:53.894 "uuid": "66ea974f-4c66-4b7c-bf16-1a8588a9a086", 00:24:53.894 "is_configured": true, 00:24:53.894 "data_offset": 2048, 00:24:53.894 "data_size": 63488 00:24:53.894 }, 00:24:53.894 { 00:24:53.894 "name": "BaseBdev3", 00:24:53.894 "uuid": "4979cf84-8f31-4371-935a-cd7146a72ca9", 00:24:53.894 "is_configured": true, 00:24:53.894 "data_offset": 2048, 00:24:53.894 "data_size": 63488 00:24:53.894 }, 00:24:53.894 { 00:24:53.894 "name": "BaseBdev4", 00:24:53.894 "uuid": "b76483c9-4d39-49c5-b9e1-643ea3a215be", 00:24:53.894 "is_configured": true, 00:24:53.894 "data_offset": 2048, 00:24:53.894 "data_size": 63488 00:24:53.894 } 00:24:53.894 ] 00:24:53.894 }' 00:24:53.894 05:43:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.894 05:43:57 -- common/autotest_common.sh@10 -- # set +x 00:24:54.461 05:43:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:54.461 05:43:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:54.461 05:43:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.461 05:43:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:54.719 05:43:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:54.719 05:43:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:54.719 05:43:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:54.978 [2024-10-07 05:43:58.894632] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:54.978 [2024-10-07 05:43:58.894805] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:54.978 [2024-10-07 05:43:58.894972] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.237 05:43:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:55.237 05:43:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:55.237 05:43:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.237 05:43:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:55.237 05:43:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:55.237 05:43:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:55.237 05:43:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:55.496 [2024-10-07 05:43:59.396539] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:55.755 05:43:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:56.014 [2024-10-07 05:43:59.923127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:56.014 [2024-10-07 05:43:59.923373] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:56.273 05:43:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:56.273 05:43:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:56.273 05:44:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.273 05:44:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:56.273 05:44:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:56.273 05:44:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:56.273 05:44:00 -- bdev/bdev_raid.sh@287 -- # killprocess 173596 00:24:56.273 05:44:00 -- common/autotest_common.sh@926 -- # '[' -z 173596 ']' 00:24:56.273 05:44:00 -- common/autotest_common.sh@930 -- # kill -0 173596 00:24:56.273 05:44:00 -- common/autotest_common.sh@931 -- # uname 00:24:56.273 05:44:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:56.273 05:44:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 173596 00:24:56.532 05:44:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:56.532 05:44:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:56.532 05:44:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 173596' 00:24:56.532 killing process with pid 173596 00:24:56.532 05:44:00 -- common/autotest_common.sh@945 -- # kill 173596 00:24:56.532 05:44:00 -- common/autotest_common.sh@950 -- # wait 173596 00:24:56.532 [2024-10-07 05:44:00.255905] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:56.532 [2024-10-07 05:44:00.256024] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:57.470 ************************************ 00:24:57.470 END TEST raid5f_state_function_test_sb 00:24:57.470 ************************************ 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:57.470 00:24:57.470 real 0m14.622s 00:24:57.470 user 0m25.721s 00:24:57.470 sys 0m1.961s 00:24:57.470 05:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.470 05:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:57.470 05:44:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:57.470 05:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:57.470 05:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:57.470 
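The hot-remove pass that closes the test above (bdev_raid.sh@273 through @287) deletes each remaining base bdev and re-queries the raid bdev after every step, expecting it to stay reported while redundancy holds and to disappear once it is torn down. A condensed sketch of that check, built only from RPCs and jq filters traced above (bdev and raid names copied from this run; the loop form is a condensation, not the harness's literal code):

    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        # Remove one base bdev, then report how the raid bdev looks afterwards.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete "$bdev"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    done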
************************************ 00:24:57.470 START TEST raid5f_superblock_test 00:24:57.470 ************************************ 00:24:57.470 05:44:01 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@357 -- # raid_pid=174039 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:57.470 05:44:01 -- bdev/bdev_raid.sh@358 -- # waitforlisten 174039 /var/tmp/spdk-raid.sock 00:24:57.470 05:44:01 -- common/autotest_common.sh@819 -- # '[' -z 174039 ']' 00:24:57.470 05:44:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:57.470 05:44:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:57.470 05:44:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:57.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:57.470 05:44:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:57.470 05:44:01 -- common/autotest_common.sh@10 -- # set +x 00:24:57.470 [2024-10-07 05:44:01.414566] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
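The superblock test that begins here assembles its array from passthru bdevs layered on malloc bdevs before creating the raid5f bdev with an on-disk superblock. Condensing the RPC sequence traced over the following portion of this log into one sketch (sizes, names, and UUIDs copied from the traced commands; the loop form is an editorial condensation):

    # Four 32 MiB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev with a fixed UUID.
    for i in 1 2 3 4; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_malloc_create 32 512 -b "malloc$i"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid5f across the passthru bdevs, 64 KiB strip size, with superblock (-s), as at bdev_raid.sh@375.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s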
00:24:57.470 [2024-10-07 05:44:01.414974] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174039 ] 00:24:57.730 [2024-10-07 05:44:01.582270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.989 [2024-10-07 05:44:01.764406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.989 [2024-10-07 05:44:01.950512] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:58.557 05:44:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:58.557 05:44:02 -- common/autotest_common.sh@852 -- # return 0 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:58.557 05:44:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:58.816 malloc1 00:24:58.816 05:44:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:59.075 [2024-10-07 05:44:02.822381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:59.075 [2024-10-07 05:44:02.822765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.075 [2024-10-07 05:44:02.822841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:59.075 [2024-10-07 05:44:02.823230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.075 [2024-10-07 05:44:02.825666] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.075 [2024-10-07 05:44:02.825856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:59.075 pt1 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.075 05:44:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:59.334 malloc2 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:59.334 [2024-10-07 05:44:03.248642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.334 [2024-10-07 05:44:03.248877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.334 [2024-10-07 05:44:03.248964] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:59.334 [2024-10-07 05:44:03.249129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.334 [2024-10-07 05:44:03.251512] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.334 [2024-10-07 05:44:03.251691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.334 pt2 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.334 05:44:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:59.593 malloc3 00:24:59.593 05:44:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:59.852 [2024-10-07 05:44:03.649563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:59.852 [2024-10-07 05:44:03.649780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.852 [2024-10-07 05:44:03.649867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:59.852 [2024-10-07 05:44:03.650008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.852 [2024-10-07 05:44:03.652281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.852 [2024-10-07 05:44:03.652435] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:59.852 pt3 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.852 05:44:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:00.111 malloc4 00:25:00.111 05:44:03 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:25:00.111 [2024-10-07 05:44:04.054882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:00.111 [2024-10-07 05:44:04.055096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.111 [2024-10-07 05:44:04.055184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:00.111 [2024-10-07 05:44:04.055330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.111 [2024-10-07 05:44:04.057729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.111 [2024-10-07 05:44:04.057901] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:00.111 pt4 00:25:00.111 05:44:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:00.111 05:44:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:00.111 05:44:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:00.370 [2024-10-07 05:44:04.242970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:00.370 [2024-10-07 05:44:04.245208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.370 [2024-10-07 05:44:04.245386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:00.370 [2024-10-07 05:44:04.245505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:00.370 [2024-10-07 05:44:04.245860] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:00.370 [2024-10-07 05:44:04.245987] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:00.370 [2024-10-07 05:44:04.246164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:00.370 [2024-10-07 05:44:04.251873] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:00.370 [2024-10-07 05:44:04.252004] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:25:00.370 [2024-10-07 05:44:04.252286] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.370 05:44:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.633 05:44:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:00.633 "name": "raid_bdev1", 00:25:00.633 "uuid": 
"a640d10e-17bc-42ca-997f-591305300672", 00:25:00.633 "strip_size_kb": 64, 00:25:00.633 "state": "online", 00:25:00.633 "raid_level": "raid5f", 00:25:00.633 "superblock": true, 00:25:00.633 "num_base_bdevs": 4, 00:25:00.633 "num_base_bdevs_discovered": 4, 00:25:00.633 "num_base_bdevs_operational": 4, 00:25:00.633 "base_bdevs_list": [ 00:25:00.633 { 00:25:00.633 "name": "pt1", 00:25:00.633 "uuid": "6e3b49aa-cf35-5b35-8673-37bef53c08d0", 00:25:00.633 "is_configured": true, 00:25:00.633 "data_offset": 2048, 00:25:00.633 "data_size": 63488 00:25:00.633 }, 00:25:00.633 { 00:25:00.633 "name": "pt2", 00:25:00.633 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:00.633 "is_configured": true, 00:25:00.633 "data_offset": 2048, 00:25:00.633 "data_size": 63488 00:25:00.633 }, 00:25:00.633 { 00:25:00.633 "name": "pt3", 00:25:00.633 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:00.633 "is_configured": true, 00:25:00.633 "data_offset": 2048, 00:25:00.633 "data_size": 63488 00:25:00.633 }, 00:25:00.633 { 00:25:00.633 "name": "pt4", 00:25:00.633 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:00.633 "is_configured": true, 00:25:00.633 "data_offset": 2048, 00:25:00.633 "data_size": 63488 00:25:00.633 } 00:25:00.633 ] 00:25:00.633 }' 00:25:00.633 05:44:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:00.633 05:44:04 -- common/autotest_common.sh@10 -- # set +x 00:25:01.205 05:44:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:01.205 05:44:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:01.464 [2024-10-07 05:44:05.342946] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.464 05:44:05 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a640d10e-17bc-42ca-997f-591305300672 00:25:01.464 05:44:05 -- bdev/bdev_raid.sh@380 -- # '[' -z a640d10e-17bc-42ca-997f-591305300672 ']' 00:25:01.464 05:44:05 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:01.723 [2024-10-07 05:44:05.594864] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:01.723 [2024-10-07 05:44:05.595011] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:01.723 [2024-10-07 05:44:05.595185] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.723 [2024-10-07 05:44:05.595382] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.723 [2024-10-07 05:44:05.595488] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:25:01.723 05:44:05 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.723 05:44:05 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:01.983 05:44:05 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:01.983 05:44:05 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:01.983 05:44:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:01.983 05:44:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:02.242 05:44:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.242 05:44:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:25:02.501 05:44:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.501 05:44:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:02.760 05:44:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:02.760 05:44:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:02.760 05:44:06 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:02.760 05:44:06 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:03.019 05:44:06 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:03.019 05:44:06 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:03.019 05:44:06 -- common/autotest_common.sh@640 -- # local es=0 00:25:03.019 05:44:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:03.019 05:44:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.019 05:44:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.019 05:44:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.019 05:44:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.019 05:44:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.019 05:44:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:03.019 05:44:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.019 05:44:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:03.019 05:44:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:03.279 [2024-10-07 05:44:07.139110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:03.279 [2024-10-07 05:44:07.141237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:03.279 [2024-10-07 05:44:07.141436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:03.279 [2024-10-07 05:44:07.141516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:03.279 [2024-10-07 05:44:07.141710] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:03.279 [2024-10-07 05:44:07.141933] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:03.279 [2024-10-07 05:44:07.142084] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:03.279 [2024-10-07 05:44:07.142262] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:25:03.279 [2024-10-07 05:44:07.142326] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.279 [2024-10-07 05:44:07.142435] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:25:03.279 request: 00:25:03.279 { 00:25:03.279 "name": "raid_bdev1", 00:25:03.279 "raid_level": "raid5f", 00:25:03.279 "base_bdevs": [ 00:25:03.279 "malloc1", 00:25:03.279 "malloc2", 00:25:03.279 "malloc3", 00:25:03.279 "malloc4" 00:25:03.279 ], 00:25:03.279 "superblock": false, 00:25:03.279 "strip_size_kb": 64, 00:25:03.279 "method": "bdev_raid_create", 00:25:03.279 "req_id": 1 00:25:03.279 } 00:25:03.279 Got JSON-RPC error response 00:25:03.279 response: 00:25:03.279 { 00:25:03.279 "code": -17, 00:25:03.279 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:03.279 } 00:25:03.279 05:44:07 -- common/autotest_common.sh@643 -- # es=1 00:25:03.279 05:44:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:03.279 05:44:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:03.279 05:44:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:03.279 05:44:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.279 05:44:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:03.540 05:44:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:03.540 05:44:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:03.540 05:44:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:03.540 [2024-10-07 05:44:07.511144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:03.540 [2024-10-07 05:44:07.511346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.540 [2024-10-07 05:44:07.511418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:03.540 [2024-10-07 05:44:07.511559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.540 [2024-10-07 05:44:07.514121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.540 [2024-10-07 05:44:07.514361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:03.540 [2024-10-07 05:44:07.514692] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:03.540 [2024-10-07 05:44:07.514871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:03.540 pt1 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.830 "name": "raid_bdev1", 00:25:03.830 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:03.830 "strip_size_kb": 64, 00:25:03.830 "state": "configuring", 00:25:03.830 "raid_level": "raid5f", 00:25:03.830 "superblock": true, 00:25:03.830 "num_base_bdevs": 4, 00:25:03.830 "num_base_bdevs_discovered": 1, 00:25:03.830 "num_base_bdevs_operational": 4, 00:25:03.830 "base_bdevs_list": [ 00:25:03.830 { 00:25:03.830 "name": "pt1", 00:25:03.830 "uuid": "6e3b49aa-cf35-5b35-8673-37bef53c08d0", 00:25:03.830 "is_configured": true, 00:25:03.830 "data_offset": 2048, 00:25:03.830 "data_size": 63488 00:25:03.830 }, 00:25:03.830 { 00:25:03.830 "name": null, 00:25:03.830 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:03.830 "is_configured": false, 00:25:03.830 "data_offset": 2048, 00:25:03.830 "data_size": 63488 00:25:03.830 }, 00:25:03.830 { 00:25:03.830 "name": null, 00:25:03.830 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:03.830 "is_configured": false, 00:25:03.830 "data_offset": 2048, 00:25:03.830 "data_size": 63488 00:25:03.830 }, 00:25:03.830 { 00:25:03.830 "name": null, 00:25:03.830 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:03.830 "is_configured": false, 00:25:03.830 "data_offset": 2048, 00:25:03.830 "data_size": 63488 00:25:03.830 } 00:25:03.830 ] 00:25:03.830 }' 00:25:03.830 05:44:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.830 05:44:07 -- common/autotest_common.sh@10 -- # set +x 00:25:04.410 05:44:08 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:25:04.410 05:44:08 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:04.667 [2024-10-07 05:44:08.511407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:04.667 [2024-10-07 05:44:08.511606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.667 [2024-10-07 05:44:08.511685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:04.667 [2024-10-07 05:44:08.511811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.667 [2024-10-07 05:44:08.512280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.667 [2024-10-07 05:44:08.512457] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:04.667 [2024-10-07 05:44:08.512650] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:04.667 [2024-10-07 05:44:08.512778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:04.667 pt2 00:25:04.667 05:44:08 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:04.924 [2024-10-07 05:44:08.699459] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:04.924 05:44:08 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.924 05:44:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.182 05:44:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.182 "name": "raid_bdev1", 00:25:05.182 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:05.182 "strip_size_kb": 64, 00:25:05.182 "state": "configuring", 00:25:05.182 "raid_level": "raid5f", 00:25:05.182 "superblock": true, 00:25:05.182 "num_base_bdevs": 4, 00:25:05.182 "num_base_bdevs_discovered": 1, 00:25:05.182 "num_base_bdevs_operational": 4, 00:25:05.182 "base_bdevs_list": [ 00:25:05.182 { 00:25:05.182 "name": "pt1", 00:25:05.182 "uuid": "6e3b49aa-cf35-5b35-8673-37bef53c08d0", 00:25:05.182 "is_configured": true, 00:25:05.182 "data_offset": 2048, 00:25:05.182 "data_size": 63488 00:25:05.182 }, 00:25:05.182 { 00:25:05.182 "name": null, 00:25:05.182 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:05.182 "is_configured": false, 00:25:05.182 "data_offset": 2048, 00:25:05.182 "data_size": 63488 00:25:05.182 }, 00:25:05.182 { 00:25:05.182 "name": null, 00:25:05.182 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:05.182 "is_configured": false, 00:25:05.182 "data_offset": 2048, 00:25:05.182 "data_size": 63488 00:25:05.182 }, 00:25:05.182 { 00:25:05.182 "name": null, 00:25:05.182 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:05.182 "is_configured": false, 00:25:05.182 "data_offset": 2048, 00:25:05.182 "data_size": 63488 00:25:05.182 } 00:25:05.182 ] 00:25:05.182 }' 00:25:05.182 05:44:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.182 05:44:08 -- common/autotest_common.sh@10 -- # set +x 00:25:05.750 05:44:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:05.750 05:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:05.750 05:44:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:05.750 [2024-10-07 05:44:09.707713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:05.750 [2024-10-07 05:44:09.707912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.750 [2024-10-07 05:44:09.707988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:05.750 [2024-10-07 05:44:09.708114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.750 [2024-10-07 05:44:09.708560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.750 [2024-10-07 05:44:09.708746] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:05.750 [2024-10-07 05:44:09.708951] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:05.750 [2024-10-07 05:44:09.709076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:05.750 pt2 00:25:05.750 05:44:09 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:05.750 05:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:05.750 05:44:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:06.008 [2024-10-07 05:44:09.971769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:06.009 [2024-10-07 05:44:09.971980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.009 [2024-10-07 05:44:09.972048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:06.009 [2024-10-07 05:44:09.972178] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.009 [2024-10-07 05:44:09.972642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.009 [2024-10-07 05:44:09.972838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:06.009 [2024-10-07 05:44:09.973032] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:06.009 [2024-10-07 05:44:09.973160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:06.009 pt3 00:25:06.267 05:44:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:06.267 05:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:06.267 05:44:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:06.267 [2024-10-07 05:44:10.159817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:06.267 [2024-10-07 05:44:10.160027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.267 [2024-10-07 05:44:10.160098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:06.267 [2024-10-07 05:44:10.160228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.267 [2024-10-07 05:44:10.160676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.267 [2024-10-07 05:44:10.160866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:06.267 [2024-10-07 05:44:10.161065] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:06.267 [2024-10-07 05:44:10.161186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:06.267 [2024-10-07 05:44:10.161370] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:06.267 [2024-10-07 05:44:10.161493] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:06.268 [2024-10-07 05:44:10.161619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:06.268 [2024-10-07 05:44:10.167120] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:06.268 [2024-10-07 05:44:10.167262] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:06.268 [2024-10-07 05:44:10.167599] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.268 pt4 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.268 05:44:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.527 05:44:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.527 "name": "raid_bdev1", 00:25:06.527 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:06.527 "strip_size_kb": 64, 00:25:06.527 "state": "online", 00:25:06.527 "raid_level": "raid5f", 00:25:06.527 "superblock": true, 00:25:06.527 "num_base_bdevs": 4, 00:25:06.527 "num_base_bdevs_discovered": 4, 00:25:06.527 "num_base_bdevs_operational": 4, 00:25:06.527 "base_bdevs_list": [ 00:25:06.527 { 00:25:06.527 "name": "pt1", 00:25:06.527 "uuid": "6e3b49aa-cf35-5b35-8673-37bef53c08d0", 00:25:06.527 "is_configured": true, 00:25:06.527 "data_offset": 2048, 00:25:06.527 "data_size": 63488 00:25:06.527 }, 00:25:06.527 { 00:25:06.527 "name": "pt2", 00:25:06.527 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:06.527 "is_configured": true, 00:25:06.527 "data_offset": 2048, 00:25:06.527 "data_size": 63488 00:25:06.527 }, 00:25:06.527 { 00:25:06.527 "name": "pt3", 00:25:06.527 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:06.527 "is_configured": true, 00:25:06.527 "data_offset": 2048, 00:25:06.527 "data_size": 63488 00:25:06.527 }, 00:25:06.527 { 00:25:06.527 "name": "pt4", 00:25:06.527 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:06.527 "is_configured": true, 00:25:06.527 "data_offset": 2048, 00:25:06.527 "data_size": 63488 00:25:06.527 } 00:25:06.527 ] 00:25:06.527 }' 00:25:06.527 05:44:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.527 05:44:10 -- common/autotest_common.sh@10 -- # set +x 00:25:07.094 05:44:11 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:07.094 05:44:11 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:07.352 [2024-10-07 05:44:11.302431] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.353 05:44:11 -- bdev/bdev_raid.sh@430 -- # '[' a640d10e-17bc-42ca-997f-591305300672 '!=' a640d10e-17bc-42ca-997f-591305300672 ']' 00:25:07.353 05:44:11 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:07.353 05:44:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:07.353 05:44:11 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:07.353 05:44:11 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:07.611 [2024-10-07 05:44:11.554393] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.612 05:44:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.870 05:44:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.870 "name": "raid_bdev1", 00:25:07.870 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:07.870 "strip_size_kb": 64, 00:25:07.870 "state": "online", 00:25:07.870 "raid_level": "raid5f", 00:25:07.870 "superblock": true, 00:25:07.870 "num_base_bdevs": 4, 00:25:07.870 "num_base_bdevs_discovered": 3, 00:25:07.870 "num_base_bdevs_operational": 3, 00:25:07.870 "base_bdevs_list": [ 00:25:07.870 { 00:25:07.870 "name": null, 00:25:07.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.870 "is_configured": false, 00:25:07.870 "data_offset": 2048, 00:25:07.870 "data_size": 63488 00:25:07.871 }, 00:25:07.871 { 00:25:07.871 "name": "pt2", 00:25:07.871 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:07.871 "is_configured": true, 00:25:07.871 "data_offset": 2048, 00:25:07.871 "data_size": 63488 00:25:07.871 }, 00:25:07.871 { 00:25:07.871 "name": "pt3", 00:25:07.871 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:07.871 "is_configured": true, 00:25:07.871 "data_offset": 2048, 00:25:07.871 "data_size": 63488 00:25:07.871 }, 00:25:07.871 { 00:25:07.871 "name": "pt4", 00:25:07.871 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:07.871 "is_configured": true, 00:25:07.871 "data_offset": 2048, 00:25:07.871 "data_size": 63488 00:25:07.871 } 00:25:07.871 ] 00:25:07.871 }' 00:25:07.871 05:44:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.871 05:44:11 -- common/autotest_common.sh@10 -- # set +x 00:25:08.438 05:44:12 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:08.697 [2024-10-07 05:44:12.538553] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:08.697 [2024-10-07 05:44:12.538698] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.697 [2024-10-07 05:44:12.538847] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.697 [2024-10-07 05:44:12.538953] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.697 [2024-10-07 05:44:12.539145] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:08.697 05:44:12 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.697 05:44:12 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:08.956 
05:44:12 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:08.956 05:44:12 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:08.956 05:44:12 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:08.956 05:44:12 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:08.956 05:44:12 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:09.215 05:44:12 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:09.215 05:44:12 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:09.215 05:44:12 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:09.473 05:44:13 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:09.732 [2024-10-07 05:44:13.606775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:09.732 [2024-10-07 05:44:13.607013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.732 [2024-10-07 05:44:13.607089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:09.732 [2024-10-07 05:44:13.607369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.732 [2024-10-07 05:44:13.609856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.732 [2024-10-07 05:44:13.610054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:09.732 [2024-10-07 05:44:13.610294] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:09.732 [2024-10-07 05:44:13.610446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:09.732 pt2 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.732 05:44:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.991 05:44:13 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:25:09.991 "name": "raid_bdev1", 00:25:09.991 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:09.991 "strip_size_kb": 64, 00:25:09.991 "state": "configuring", 00:25:09.991 "raid_level": "raid5f", 00:25:09.991 "superblock": true, 00:25:09.991 "num_base_bdevs": 4, 00:25:09.991 "num_base_bdevs_discovered": 1, 00:25:09.991 "num_base_bdevs_operational": 3, 00:25:09.991 "base_bdevs_list": [ 00:25:09.991 { 00:25:09.991 "name": null, 00:25:09.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.991 "is_configured": false, 00:25:09.991 "data_offset": 2048, 00:25:09.991 "data_size": 63488 00:25:09.991 }, 00:25:09.991 { 00:25:09.991 "name": "pt2", 00:25:09.991 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:09.991 "is_configured": true, 00:25:09.991 "data_offset": 2048, 00:25:09.991 "data_size": 63488 00:25:09.991 }, 00:25:09.991 { 00:25:09.991 "name": null, 00:25:09.991 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:09.991 "is_configured": false, 00:25:09.991 "data_offset": 2048, 00:25:09.991 "data_size": 63488 00:25:09.991 }, 00:25:09.991 { 00:25:09.991 "name": null, 00:25:09.991 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:09.991 "is_configured": false, 00:25:09.991 "data_offset": 2048, 00:25:09.991 "data_size": 63488 00:25:09.991 } 00:25:09.991 ] 00:25:09.991 }' 00:25:09.991 05:44:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.991 05:44:13 -- common/autotest_common.sh@10 -- # set +x 00:25:10.558 05:44:14 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:10.558 05:44:14 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:10.558 05:44:14 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:10.817 [2024-10-07 05:44:14.642984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:10.817 [2024-10-07 05:44:14.643217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.817 [2024-10-07 05:44:14.643303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:10.817 [2024-10-07 05:44:14.643459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.817 [2024-10-07 05:44:14.643979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.817 [2024-10-07 05:44:14.644161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:10.817 [2024-10-07 05:44:14.644367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:10.817 [2024-10-07 05:44:14.644514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:10.817 pt3 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
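
The verify_raid_bdev_state helper seen throughout this trace boils down to one bdev_raid_get_bdevs RPC filtered through jq, followed by assertions on the returned fields. A minimal stand-alone sketch of the check running at this point in the log (socket path, bdev name and expected values copied from the trace above; the helper's full argument handling is not reproduced):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock

  # Query all raid bdevs and keep only raid_bdev1, as the helper does above.
  info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')

  state=$(jq -r '.state' <<<"$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
  operational=$(jq -r '.num_base_bdevs_operational' <<<"$info")

  # At this point the trace expects "configuring" with 1 of 3 members present.
  [[ $state == configuring && $discovered -eq 1 && $operational -eq 3 ]] ||
      { echo "unexpected raid_bdev1 state: $info" >&2; exit 1; }
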
00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.817 05:44:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.076 05:44:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.076 "name": "raid_bdev1", 00:25:11.076 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:11.076 "strip_size_kb": 64, 00:25:11.076 "state": "configuring", 00:25:11.076 "raid_level": "raid5f", 00:25:11.076 "superblock": true, 00:25:11.076 "num_base_bdevs": 4, 00:25:11.076 "num_base_bdevs_discovered": 2, 00:25:11.076 "num_base_bdevs_operational": 3, 00:25:11.076 "base_bdevs_list": [ 00:25:11.076 { 00:25:11.076 "name": null, 00:25:11.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.076 "is_configured": false, 00:25:11.076 "data_offset": 2048, 00:25:11.076 "data_size": 63488 00:25:11.076 }, 00:25:11.076 { 00:25:11.076 "name": "pt2", 00:25:11.076 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:11.076 "is_configured": true, 00:25:11.076 "data_offset": 2048, 00:25:11.076 "data_size": 63488 00:25:11.076 }, 00:25:11.076 { 00:25:11.076 "name": "pt3", 00:25:11.076 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:11.076 "is_configured": true, 00:25:11.076 "data_offset": 2048, 00:25:11.076 "data_size": 63488 00:25:11.076 }, 00:25:11.076 { 00:25:11.076 "name": null, 00:25:11.076 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:11.076 "is_configured": false, 00:25:11.076 "data_offset": 2048, 00:25:11.076 "data_size": 63488 00:25:11.076 } 00:25:11.076 ] 00:25:11.076 }' 00:25:11.076 05:44:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.076 05:44:14 -- common/autotest_common.sh@10 -- # set +x 00:25:11.643 05:44:15 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:11.643 05:44:15 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:11.643 05:44:15 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:11.643 05:44:15 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:11.901 [2024-10-07 05:44:15.735210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:11.902 [2024-10-07 05:44:15.735427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.902 [2024-10-07 05:44:15.735583] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:11.902 [2024-10-07 05:44:15.735710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.902 [2024-10-07 05:44:15.736282] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.902 [2024-10-07 05:44:15.736451] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:11.902 [2024-10-07 05:44:15.736645] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:11.902 [2024-10-07 05:44:15.736776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:11.902 [2024-10-07 05:44:15.737040] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:25:11.902 [2024-10-07 05:44:15.737155] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:11.902 [2024-10-07 05:44:15.737309] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:25:11.902 [2024-10-07 05:44:15.742945] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:25:11.902 [2024-10-07 05:44:15.743100] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:25:11.902 [2024-10-07 05:44:15.743532] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.902 pt4 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.902 05:44:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.160 05:44:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.160 "name": "raid_bdev1", 00:25:12.160 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:12.160 "strip_size_kb": 64, 00:25:12.160 "state": "online", 00:25:12.160 "raid_level": "raid5f", 00:25:12.160 "superblock": true, 00:25:12.160 "num_base_bdevs": 4, 00:25:12.160 "num_base_bdevs_discovered": 3, 00:25:12.160 "num_base_bdevs_operational": 3, 00:25:12.160 "base_bdevs_list": [ 00:25:12.160 { 00:25:12.160 "name": null, 00:25:12.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.160 "is_configured": false, 00:25:12.160 "data_offset": 2048, 00:25:12.160 "data_size": 63488 00:25:12.160 }, 00:25:12.160 { 00:25:12.160 "name": "pt2", 00:25:12.160 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:12.160 "is_configured": true, 00:25:12.160 "data_offset": 2048, 00:25:12.160 "data_size": 63488 00:25:12.160 }, 00:25:12.160 { 00:25:12.160 "name": "pt3", 00:25:12.160 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:12.160 "is_configured": true, 00:25:12.160 "data_offset": 2048, 00:25:12.160 "data_size": 63488 00:25:12.160 }, 00:25:12.160 { 00:25:12.160 "name": "pt4", 00:25:12.160 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:12.160 "is_configured": true, 00:25:12.160 "data_offset": 2048, 00:25:12.160 "data_size": 63488 00:25:12.160 } 00:25:12.160 ] 00:25:12.160 }' 00:25:12.160 05:44:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.161 05:44:15 -- common/autotest_common.sh@10 -- # set +x 00:25:12.728 05:44:16 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:12.728 05:44:16 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:12.987 [2024-10-07 05:44:16.766311] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:12.987 [2024-10-07 05:44:16.766456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:12.987 [2024-10-07 05:44:16.766642] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:12.987 [2024-10-07 05:44:16.766850] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:12.987 [2024-10-07 05:44:16.766976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:25:12.987 05:44:16 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.987 05:44:16 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:13.247 05:44:17 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:13.247 05:44:17 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:13.247 05:44:17 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:13.506 [2024-10-07 05:44:17.274406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:13.506 [2024-10-07 05:44:17.274630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.506 [2024-10-07 05:44:17.274710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:13.506 [2024-10-07 05:44:17.274837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.506 [2024-10-07 05:44:17.277267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.506 [2024-10-07 05:44:17.277464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:13.506 [2024-10-07 05:44:17.277692] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:13.506 [2024-10-07 05:44:17.277839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:13.506 pt1 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.506 05:44:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.765 05:44:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.765 "name": "raid_bdev1", 00:25:13.765 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:13.765 "strip_size_kb": 64, 00:25:13.765 "state": "configuring", 00:25:13.765 "raid_level": "raid5f", 00:25:13.765 "superblock": true, 00:25:13.765 "num_base_bdevs": 4, 00:25:13.765 "num_base_bdevs_discovered": 1, 00:25:13.765 "num_base_bdevs_operational": 4, 00:25:13.765 "base_bdevs_list": [ 00:25:13.765 { 00:25:13.765 "name": "pt1", 00:25:13.765 "uuid": "6e3b49aa-cf35-5b35-8673-37bef53c08d0", 00:25:13.765 "is_configured": true, 
00:25:13.765 "data_offset": 2048, 00:25:13.765 "data_size": 63488 00:25:13.765 }, 00:25:13.765 { 00:25:13.765 "name": null, 00:25:13.765 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:13.765 "is_configured": false, 00:25:13.765 "data_offset": 2048, 00:25:13.765 "data_size": 63488 00:25:13.765 }, 00:25:13.765 { 00:25:13.765 "name": null, 00:25:13.765 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:13.765 "is_configured": false, 00:25:13.765 "data_offset": 2048, 00:25:13.765 "data_size": 63488 00:25:13.765 }, 00:25:13.765 { 00:25:13.765 "name": null, 00:25:13.765 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:13.765 "is_configured": false, 00:25:13.765 "data_offset": 2048, 00:25:13.765 "data_size": 63488 00:25:13.765 } 00:25:13.765 ] 00:25:13.765 }' 00:25:13.765 05:44:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.765 05:44:17 -- common/autotest_common.sh@10 -- # set +x 00:25:14.332 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:14.332 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:14.332 05:44:18 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:14.589 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:14.589 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:14.589 05:44:18 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:14.847 05:44:18 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:15.105 [2024-10-07 05:44:18.990810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:15.105 [2024-10-07 05:44:18.991079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.105 [2024-10-07 05:44:18.991158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:15.105 [2024-10-07 05:44:18.991433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.105 [2024-10-07 05:44:18.992008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.105 [2024-10-07 05:44:18.992198] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:15.105 [2024-10-07 05:44:18.992418] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:15.105 [2024-10-07 05:44:18.992540] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:15.105 [2024-10-07 05:44:18.992643] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.105 [2024-10-07 05:44:18.992779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:25:15.105 [2024-10-07 05:44:18.992984] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:15.105 pt4 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.105 05:44:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.363 05:44:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:15.363 "name": "raid_bdev1", 00:25:15.363 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:15.363 "strip_size_kb": 64, 00:25:15.363 "state": "configuring", 00:25:15.363 "raid_level": "raid5f", 00:25:15.363 "superblock": true, 00:25:15.363 "num_base_bdevs": 4, 00:25:15.363 "num_base_bdevs_discovered": 1, 00:25:15.363 "num_base_bdevs_operational": 3, 00:25:15.363 "base_bdevs_list": [ 00:25:15.363 { 00:25:15.363 "name": null, 00:25:15.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.363 "is_configured": false, 00:25:15.363 "data_offset": 2048, 00:25:15.363 "data_size": 63488 00:25:15.363 }, 00:25:15.363 { 00:25:15.363 "name": null, 00:25:15.363 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:15.363 "is_configured": false, 00:25:15.363 "data_offset": 2048, 00:25:15.363 "data_size": 63488 00:25:15.363 }, 00:25:15.363 { 00:25:15.363 "name": null, 00:25:15.363 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:15.363 "is_configured": false, 00:25:15.363 "data_offset": 2048, 00:25:15.363 "data_size": 63488 00:25:15.363 }, 00:25:15.363 { 00:25:15.363 "name": "pt4", 00:25:15.363 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:15.363 "is_configured": true, 00:25:15.363 "data_offset": 2048, 00:25:15.363 "data_size": 63488 00:25:15.363 } 00:25:15.363 ] 00:25:15.363 }' 00:25:15.363 05:44:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:15.363 05:44:19 -- common/autotest_common.sh@10 -- # set +x 00:25:15.930 05:44:19 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:15.930 05:44:19 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:15.930 05:44:19 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:16.189 [2024-10-07 05:44:20.071076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:16.189 [2024-10-07 05:44:20.071335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.189 [2024-10-07 05:44:20.071420] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:25:16.189 [2024-10-07 05:44:20.071644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.189 [2024-10-07 05:44:20.072179] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.189 [2024-10-07 05:44:20.072372] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:16.189 [2024-10-07 05:44:20.072611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:16.189 [2024-10-07 05:44:20.072756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:16.189 pt2 00:25:16.189 05:44:20 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:16.189 05:44:20 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:16.189 05:44:20 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:16.447 [2024-10-07 05:44:20.323147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:16.447 [2024-10-07 05:44:20.323358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.447 [2024-10-07 05:44:20.323431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:25:16.447 [2024-10-07 05:44:20.323561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.447 [2024-10-07 05:44:20.324013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.447 [2024-10-07 05:44:20.324220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:16.447 [2024-10-07 05:44:20.324420] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:16.447 [2024-10-07 05:44:20.324563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:16.447 [2024-10-07 05:44:20.324730] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:25:16.447 [2024-10-07 05:44:20.324838] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:16.447 [2024-10-07 05:44:20.324977] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:25:16.448 [2024-10-07 05:44:20.330761] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:25:16.448 [2024-10-07 05:44:20.330905] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:25:16.448 [2024-10-07 05:44:20.331261] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:16.448 pt3 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.448 05:44:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.706 05:44:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.706 "name": "raid_bdev1", 00:25:16.706 "uuid": "a640d10e-17bc-42ca-997f-591305300672", 00:25:16.707 "strip_size_kb": 64, 00:25:16.707 "state": "online", 00:25:16.707 "raid_level": "raid5f", 00:25:16.707 "superblock": true, 00:25:16.707 "num_base_bdevs": 4, 00:25:16.707 "num_base_bdevs_discovered": 3, 00:25:16.707 "num_base_bdevs_operational": 3, 00:25:16.707 "base_bdevs_list": [ 00:25:16.707 { 00:25:16.707 "name": null, 00:25:16.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.707 "is_configured": false, 00:25:16.707 "data_offset": 2048, 00:25:16.707 "data_size": 63488 00:25:16.707 }, 00:25:16.707 { 00:25:16.707 "name": "pt2", 00:25:16.707 "uuid": "63e10dab-d5bc-5e67-82d6-d2ec577f6fe6", 00:25:16.707 "is_configured": true, 00:25:16.707 "data_offset": 2048, 00:25:16.707 "data_size": 63488 00:25:16.707 }, 00:25:16.707 { 00:25:16.707 "name": "pt3", 00:25:16.707 "uuid": "dbde2bc6-7121-50cd-b479-23a9424c0255", 00:25:16.707 "is_configured": true, 00:25:16.707 "data_offset": 2048, 00:25:16.707 "data_size": 63488 00:25:16.707 }, 00:25:16.707 { 00:25:16.707 "name": "pt4", 00:25:16.707 "uuid": "7d3be9d8-f034-57d0-b13c-4f6269ce6105", 00:25:16.707 "is_configured": true, 00:25:16.707 "data_offset": 2048, 00:25:16.707 "data_size": 63488 00:25:16.707 } 00:25:16.707 ] 00:25:16.707 }' 00:25:16.707 05:44:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.707 05:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:17.275 05:44:21 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:17.275 05:44:21 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:17.534 [2024-10-07 05:44:21.362186] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:17.534 05:44:21 -- bdev/bdev_raid.sh@506 -- # '[' a640d10e-17bc-42ca-997f-591305300672 '!=' a640d10e-17bc-42ca-997f-591305300672 ']' 00:25:17.534 05:44:21 -- bdev/bdev_raid.sh@511 -- # killprocess 174039 00:25:17.534 05:44:21 -- common/autotest_common.sh@926 -- # '[' -z 174039 ']' 00:25:17.534 05:44:21 -- common/autotest_common.sh@930 -- # kill -0 174039 00:25:17.534 05:44:21 -- common/autotest_common.sh@931 -- # uname 00:25:17.534 05:44:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:17.534 05:44:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 174039 00:25:17.534 killing process with pid 174039 00:25:17.534 05:44:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:17.534 05:44:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:17.534 05:44:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 174039' 00:25:17.534 05:44:21 -- common/autotest_common.sh@945 -- # kill 174039 00:25:17.534 [2024-10-07 05:44:21.410825] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:17.534 05:44:21 -- common/autotest_common.sh@950 -- # wait 174039 00:25:17.534 [2024-10-07 05:44:21.410943] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:17.534 [2024-10-07 05:44:21.411050] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:17.534 [2024-10-07 05:44:21.411061] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:25:17.795 [2024-10-07 05:44:21.682906] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:18.762 05:44:22 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:18.762 00:25:18.762 real 0m21.375s 00:25:18.762 user 0m39.068s 00:25:18.762 sys 0m2.614s 00:25:18.762 05:44:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.762 ************************************ 00:25:18.762 END TEST raid5f_superblock_test 00:25:18.762 ************************************ 00:25:18.762 05:44:22 -- common/autotest_common.sh@10 -- # set +x 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:19.021 05:44:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:19.021 05:44:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:19.021 05:44:22 -- common/autotest_common.sh@10 -- # set +x 00:25:19.021 ************************************ 00:25:19.021 START TEST raid5f_rebuild_test 00:25:19.021 ************************************ 00:25:19.021 05:44:22 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@544 -- # raid_pid=174706 
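
Between the two tests, a brief recap of what raid5f_superblock_test just exercised: raid_bdev1 is assembled from four passthru bdevs that carry an on-disk superblock, torn down, and then re-assembled from that metadata alone (the "Existing raid superblock found" errors and the configuring-to-online transitions above are the interesting part). Condensed to the RPCs visible in this trace, using the same socket and bdev names (a summary sketch, not the full bdev_raid.sh logic):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # One passthru bdev per malloc device, each with a fixed UUID, as traced above.
  for i in 1 2 3 4; do
      $RPC bdev_passthru_create -b malloc$i -p pt$i \
          -u 00000000-0000-0000-0000-00000000000$i
  done

  # -z 64: 64 KiB strip size; -s: write a superblock to every base bdev so the
  # array can later be re-discovered from on-disk metadata alone.
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

  # Deleting the raid bdev and the passthru members, then re-creating the members
  # one by one, drives the configuring -> online re-assembly checked above.
  $RPC bdev_raid_delete raid_bdev1
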
00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@545 -- # waitforlisten 174706 /var/tmp/spdk-raid.sock 00:25:19.021 05:44:22 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:19.021 05:44:22 -- common/autotest_common.sh@819 -- # '[' -z 174706 ']' 00:25:19.021 05:44:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:19.021 05:44:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:19.021 05:44:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:19.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:19.021 05:44:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:19.022 05:44:22 -- common/autotest_common.sh@10 -- # set +x 00:25:19.022 [2024-10-07 05:44:22.859137] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:19.022 [2024-10-07 05:44:22.859470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174706 ] 00:25:19.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:19.022 Zero copy mechanism will not be used. 00:25:19.281 [2024-10-07 05:44:23.012990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.281 [2024-10-07 05:44:23.201731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.540 [2024-10-07 05:44:23.388902] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:20.107 05:44:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.107 05:44:23 -- common/autotest_common.sh@852 -- # return 0 00:25:20.107 05:44:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:20.107 05:44:23 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:20.107 05:44:23 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:20.107 BaseBdev1 00:25:20.107 05:44:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:20.107 05:44:24 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:20.107 05:44:24 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.676 BaseBdev2 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:20.676 BaseBdev3 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:20.676 05:44:24 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:20.936 BaseBdev4 00:25:20.936 05:44:24 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:21.194 spare_malloc 00:25:21.194 05:44:25 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:21.453 spare_delay 00:25:21.453 05:44:25 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:21.711 [2024-10-07 05:44:25.524180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:21.711 [2024-10-07 05:44:25.524411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.711 [2024-10-07 05:44:25.524490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:21.711 [2024-10-07 05:44:25.524686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.711 [2024-10-07 05:44:25.527236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.712 [2024-10-07 05:44:25.527438] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:21.712 spare 00:25:21.712 05:44:25 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:21.970 [2024-10-07 05:44:25.716267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.970 [2024-10-07 05:44:25.718428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.970 [2024-10-07 05:44:25.718618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:21.970 [2024-10-07 05:44:25.718702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:21.970 [2024-10-07 05:44:25.718932] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:25:21.970 [2024-10-07 05:44:25.719038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:21.970 [2024-10-07 05:44:25.719353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:21.970 [2024-10-07 05:44:25.725138] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:25:21.970 [2024-10-07 05:44:25.725272] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:25:21.970 [2024-10-07 05:44:25.725578] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:21.970 05:44:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.971 05:44:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.971 "name": "raid_bdev1", 00:25:21.971 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:21.971 "strip_size_kb": 64, 00:25:21.971 "state": "online", 00:25:21.971 "raid_level": "raid5f", 00:25:21.971 "superblock": false, 00:25:21.971 "num_base_bdevs": 4, 00:25:21.971 "num_base_bdevs_discovered": 4, 00:25:21.971 "num_base_bdevs_operational": 4, 00:25:21.971 "base_bdevs_list": [ 00:25:21.971 { 00:25:21.971 "name": "BaseBdev1", 00:25:21.971 "uuid": "13cd5c1a-3038-43e8-9305-14615a7ed386", 00:25:21.971 "is_configured": true, 00:25:21.971 "data_offset": 0, 00:25:21.971 "data_size": 65536 00:25:21.971 }, 00:25:21.971 { 00:25:21.971 "name": "BaseBdev2", 00:25:21.971 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:21.971 "is_configured": true, 00:25:21.971 "data_offset": 0, 00:25:21.971 "data_size": 65536 00:25:21.971 }, 00:25:21.971 { 00:25:21.971 "name": "BaseBdev3", 00:25:21.971 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:21.971 "is_configured": true, 00:25:21.971 "data_offset": 0, 00:25:21.971 "data_size": 65536 00:25:21.971 }, 00:25:21.971 { 00:25:21.971 "name": "BaseBdev4", 00:25:21.971 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:21.971 "is_configured": true, 00:25:21.971 "data_offset": 0, 00:25:21.971 "data_size": 65536 00:25:21.971 } 00:25:21.971 ] 00:25:21.971 }' 00:25:21.971 05:44:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.971 05:44:25 -- common/autotest_common.sh@10 -- # set +x 00:25:22.908 05:44:26 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:22.908 05:44:26 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:22.908 [2024-10-07 05:44:26.800845] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:22.908 05:44:26 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:22.908 05:44:26 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:22.908 05:44:26 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.167 05:44:27 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:23.167 05:44:27 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:23.167 05:44:27 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:23.167 05:44:27 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@12 -- # local i 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:23.167 05:44:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:23.425 [2024-10-07 05:44:27.240750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:23.425 /dev/nbd0 00:25:23.425 05:44:27 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:25:23.426 05:44:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:23.426 05:44:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:23.426 05:44:27 -- common/autotest_common.sh@857 -- # local i 00:25:23.426 05:44:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:23.426 05:44:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:23.426 05:44:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:23.426 05:44:27 -- common/autotest_common.sh@861 -- # break 00:25:23.426 05:44:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:23.426 05:44:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:23.426 05:44:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:23.426 1+0 records in 00:25:23.426 1+0 records out 00:25:23.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508935 s, 8.0 MB/s 00:25:23.426 05:44:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:23.426 05:44:27 -- common/autotest_common.sh@874 -- # size=4096 00:25:23.426 05:44:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:23.426 05:44:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:23.426 05:44:27 -- common/autotest_common.sh@877 -- # return 0 00:25:23.426 05:44:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:23.426 05:44:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:23.426 05:44:27 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:23.426 05:44:27 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:23.426 05:44:27 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:23.426 05:44:27 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:23.994 512+0 records in 00:25:23.994 512+0 records out 00:25:23.994 100663296 bytes (101 MB, 96 MiB) copied, 0.474405 s, 212 MB/s 00:25:23.994 05:44:27 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@51 -- # local i 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:23.994 05:44:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:24.254 [2024-10-07 05:44:28.065381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@41 -- # break 00:25:24.254 05:44:28 -- bdev/nbd_common.sh@45 -- # return 0 00:25:24.254 05:44:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:24.513 [2024-10-07 05:44:28.309500] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
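(Editor's sketch, not part of the captured output.) The dd command traced above writes whole raid5f stripes through /dev/nbd0; the sizes it uses follow from the test parameters already visible in this log (strip_size_kb=64, 4 base bdevs, 512-byte blocks), assuming one parity strip per raid5f stripe:
# hypothetical reconstruction of the full-stripe write arithmetic seen in the trace
strip_size_kb=64                                        # -z 64 passed to bdev_raid_create
num_base_bdevs=4
block_size=512                                          # blocklen 512 reported at raid creation
strip_blocks=$(( strip_size_kb * 1024 / block_size ))   # 128 blocks per strip
data_strips=$(( num_base_bdevs - 1 ))                   # 3 data strips, 1 parity strip per stripe
echo $(( data_strips * strip_blocks ))                  # 384 -> write_unit_size in the trace
echo $(( data_strips * strip_blocks * block_size ))     # 196608 -> dd bs
echo $(( data_strips * strip_blocks * block_size * 512 ))  # 100663296 bytes (~96 MiB), matching dd's 512-count output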
00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.513 05:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.770 05:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:24.770 "name": "raid_bdev1", 00:25:24.770 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:24.770 "strip_size_kb": 64, 00:25:24.770 "state": "online", 00:25:24.770 "raid_level": "raid5f", 00:25:24.770 "superblock": false, 00:25:24.770 "num_base_bdevs": 4, 00:25:24.770 "num_base_bdevs_discovered": 3, 00:25:24.770 "num_base_bdevs_operational": 3, 00:25:24.770 "base_bdevs_list": [ 00:25:24.770 { 00:25:24.770 "name": null, 00:25:24.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.770 "is_configured": false, 00:25:24.770 "data_offset": 0, 00:25:24.770 "data_size": 65536 00:25:24.770 }, 00:25:24.770 { 00:25:24.770 "name": "BaseBdev2", 00:25:24.770 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:24.770 "is_configured": true, 00:25:24.770 "data_offset": 0, 00:25:24.770 "data_size": 65536 00:25:24.770 }, 00:25:24.770 { 00:25:24.770 "name": "BaseBdev3", 00:25:24.770 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:24.770 "is_configured": true, 00:25:24.770 "data_offset": 0, 00:25:24.770 "data_size": 65536 00:25:24.770 }, 00:25:24.770 { 00:25:24.770 "name": "BaseBdev4", 00:25:24.770 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:24.770 "is_configured": true, 00:25:24.770 "data_offset": 0, 00:25:24.770 "data_size": 65536 00:25:24.770 } 00:25:24.770 ] 00:25:24.770 }' 00:25:24.770 05:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:24.770 05:44:28 -- common/autotest_common.sh@10 -- # set +x 00:25:25.334 05:44:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:25.593 [2024-10-07 05:44:29.413666] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:25.593 [2024-10-07 05:44:29.413834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:25.593 [2024-10-07 05:44:29.424491] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:25:25.593 [2024-10-07 05:44:29.431694] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:25.593 05:44:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.527 05:44:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:26.785 "name": "raid_bdev1", 00:25:26.785 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:26.785 "strip_size_kb": 64, 00:25:26.785 "state": "online", 00:25:26.785 "raid_level": "raid5f", 00:25:26.785 "superblock": false, 00:25:26.785 "num_base_bdevs": 4, 00:25:26.785 "num_base_bdevs_discovered": 4, 00:25:26.785 "num_base_bdevs_operational": 4, 00:25:26.785 "process": { 00:25:26.785 "type": "rebuild", 00:25:26.785 "target": "spare", 00:25:26.785 "progress": { 00:25:26.785 "blocks": 21120, 00:25:26.785 "percent": 10 00:25:26.785 } 00:25:26.785 }, 00:25:26.785 "base_bdevs_list": [ 00:25:26.785 { 00:25:26.785 "name": "spare", 00:25:26.785 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:26.785 "is_configured": true, 00:25:26.785 "data_offset": 0, 00:25:26.785 "data_size": 65536 00:25:26.785 }, 00:25:26.785 { 00:25:26.785 "name": "BaseBdev2", 00:25:26.785 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:26.785 "is_configured": true, 00:25:26.785 "data_offset": 0, 00:25:26.785 "data_size": 65536 00:25:26.785 }, 00:25:26.785 { 00:25:26.785 "name": "BaseBdev3", 00:25:26.785 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:26.785 "is_configured": true, 00:25:26.785 "data_offset": 0, 00:25:26.785 "data_size": 65536 00:25:26.785 }, 00:25:26.785 { 00:25:26.785 "name": "BaseBdev4", 00:25:26.785 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:26.785 "is_configured": true, 00:25:26.785 "data_offset": 0, 00:25:26.785 "data_size": 65536 00:25:26.785 } 00:25:26.785 ] 00:25:26.785 }' 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.785 05:44:30 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:27.044 [2024-10-07 05:44:30.896930] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:27.044 [2024-10-07 05:44:30.942158] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:27.044 [2024-10-07 05:44:30.942382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.044 05:44:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.301 05:44:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.301 "name": "raid_bdev1", 00:25:27.301 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:27.301 "strip_size_kb": 64, 00:25:27.301 "state": "online", 00:25:27.301 "raid_level": "raid5f", 00:25:27.301 "superblock": false, 00:25:27.301 "num_base_bdevs": 4, 00:25:27.301 "num_base_bdevs_discovered": 3, 00:25:27.301 "num_base_bdevs_operational": 3, 00:25:27.301 "base_bdevs_list": [ 00:25:27.301 { 00:25:27.301 "name": null, 00:25:27.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.301 "is_configured": false, 00:25:27.301 "data_offset": 0, 00:25:27.301 "data_size": 65536 00:25:27.301 }, 00:25:27.301 { 00:25:27.301 "name": "BaseBdev2", 00:25:27.301 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:27.301 "is_configured": true, 00:25:27.301 "data_offset": 0, 00:25:27.301 "data_size": 65536 00:25:27.301 }, 00:25:27.301 { 00:25:27.301 "name": "BaseBdev3", 00:25:27.301 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:27.301 "is_configured": true, 00:25:27.301 "data_offset": 0, 00:25:27.301 "data_size": 65536 00:25:27.301 }, 00:25:27.301 { 00:25:27.301 "name": "BaseBdev4", 00:25:27.301 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:27.301 "is_configured": true, 00:25:27.301 "data_offset": 0, 00:25:27.301 "data_size": 65536 00:25:27.301 } 00:25:27.301 ] 00:25:27.301 }' 00:25:27.301 05:44:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.301 05:44:31 -- common/autotest_common.sh@10 -- # set +x 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.237 05:44:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.237 05:44:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:28.237 "name": "raid_bdev1", 00:25:28.237 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:28.237 "strip_size_kb": 64, 00:25:28.237 "state": "online", 00:25:28.237 "raid_level": "raid5f", 00:25:28.237 "superblock": false, 00:25:28.237 "num_base_bdevs": 4, 00:25:28.237 "num_base_bdevs_discovered": 3, 00:25:28.237 "num_base_bdevs_operational": 3, 00:25:28.237 "base_bdevs_list": [ 00:25:28.237 { 00:25:28.237 "name": null, 00:25:28.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.237 "is_configured": false, 00:25:28.237 "data_offset": 0, 00:25:28.237 "data_size": 65536 00:25:28.237 }, 00:25:28.237 { 00:25:28.237 "name": "BaseBdev2", 00:25:28.237 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:28.237 "is_configured": true, 00:25:28.237 "data_offset": 0, 00:25:28.237 "data_size": 65536 00:25:28.237 }, 00:25:28.237 { 00:25:28.237 "name": "BaseBdev3", 00:25:28.237 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:28.237 "is_configured": true, 
00:25:28.237 "data_offset": 0, 00:25:28.237 "data_size": 65536 00:25:28.237 }, 00:25:28.237 { 00:25:28.237 "name": "BaseBdev4", 00:25:28.237 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:28.237 "is_configured": true, 00:25:28.237 "data_offset": 0, 00:25:28.237 "data_size": 65536 00:25:28.237 } 00:25:28.237 ] 00:25:28.237 }' 00:25:28.237 05:44:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:28.237 05:44:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:28.237 05:44:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:28.495 05:44:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:28.495 05:44:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:28.754 [2024-10-07 05:44:32.485649] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:28.754 [2024-10-07 05:44:32.485840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:28.754 [2024-10-07 05:44:32.495965] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:25:28.754 [2024-10-07 05:44:32.503323] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:28.754 05:44:32 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.688 05:44:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:29.947 "name": "raid_bdev1", 00:25:29.947 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:29.947 "strip_size_kb": 64, 00:25:29.947 "state": "online", 00:25:29.947 "raid_level": "raid5f", 00:25:29.947 "superblock": false, 00:25:29.947 "num_base_bdevs": 4, 00:25:29.947 "num_base_bdevs_discovered": 4, 00:25:29.947 "num_base_bdevs_operational": 4, 00:25:29.947 "process": { 00:25:29.947 "type": "rebuild", 00:25:29.947 "target": "spare", 00:25:29.947 "progress": { 00:25:29.947 "blocks": 23040, 00:25:29.947 "percent": 11 00:25:29.947 } 00:25:29.947 }, 00:25:29.947 "base_bdevs_list": [ 00:25:29.947 { 00:25:29.947 "name": "spare", 00:25:29.947 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:29.947 "is_configured": true, 00:25:29.947 "data_offset": 0, 00:25:29.947 "data_size": 65536 00:25:29.947 }, 00:25:29.947 { 00:25:29.947 "name": "BaseBdev2", 00:25:29.947 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:29.947 "is_configured": true, 00:25:29.947 "data_offset": 0, 00:25:29.947 "data_size": 65536 00:25:29.947 }, 00:25:29.947 { 00:25:29.947 "name": "BaseBdev3", 00:25:29.947 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:29.947 "is_configured": true, 00:25:29.947 "data_offset": 0, 00:25:29.947 "data_size": 65536 00:25:29.947 }, 00:25:29.947 { 00:25:29.947 "name": "BaseBdev4", 00:25:29.947 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:29.947 "is_configured": true, 00:25:29.947 "data_offset": 0, 
00:25:29.947 "data_size": 65536 00:25:29.947 } 00:25:29.947 ] 00:25:29.947 }' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@657 -- # local timeout=713 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.947 05:44:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:30.206 "name": "raid_bdev1", 00:25:30.206 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:30.206 "strip_size_kb": 64, 00:25:30.206 "state": "online", 00:25:30.206 "raid_level": "raid5f", 00:25:30.206 "superblock": false, 00:25:30.206 "num_base_bdevs": 4, 00:25:30.206 "num_base_bdevs_discovered": 4, 00:25:30.206 "num_base_bdevs_operational": 4, 00:25:30.206 "process": { 00:25:30.206 "type": "rebuild", 00:25:30.206 "target": "spare", 00:25:30.206 "progress": { 00:25:30.206 "blocks": 28800, 00:25:30.206 "percent": 14 00:25:30.206 } 00:25:30.206 }, 00:25:30.206 "base_bdevs_list": [ 00:25:30.206 { 00:25:30.206 "name": "spare", 00:25:30.206 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:30.206 "is_configured": true, 00:25:30.206 "data_offset": 0, 00:25:30.206 "data_size": 65536 00:25:30.206 }, 00:25:30.206 { 00:25:30.206 "name": "BaseBdev2", 00:25:30.206 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:30.206 "is_configured": true, 00:25:30.206 "data_offset": 0, 00:25:30.206 "data_size": 65536 00:25:30.206 }, 00:25:30.206 { 00:25:30.206 "name": "BaseBdev3", 00:25:30.206 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:30.206 "is_configured": true, 00:25:30.206 "data_offset": 0, 00:25:30.206 "data_size": 65536 00:25:30.206 }, 00:25:30.206 { 00:25:30.206 "name": "BaseBdev4", 00:25:30.206 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:30.206 "is_configured": true, 00:25:30.206 "data_offset": 0, 00:25:30.206 "data_size": 65536 00:25:30.206 } 00:25:30.206 ] 00:25:30.206 }' 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:30.206 05:44:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:31.584 "name": "raid_bdev1", 00:25:31.584 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:31.584 "strip_size_kb": 64, 00:25:31.584 "state": "online", 00:25:31.584 "raid_level": "raid5f", 00:25:31.584 "superblock": false, 00:25:31.584 "num_base_bdevs": 4, 00:25:31.584 "num_base_bdevs_discovered": 4, 00:25:31.584 "num_base_bdevs_operational": 4, 00:25:31.584 "process": { 00:25:31.584 "type": "rebuild", 00:25:31.584 "target": "spare", 00:25:31.584 "progress": { 00:25:31.584 "blocks": 53760, 00:25:31.584 "percent": 27 00:25:31.584 } 00:25:31.584 }, 00:25:31.584 "base_bdevs_list": [ 00:25:31.584 { 00:25:31.584 "name": "spare", 00:25:31.584 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:31.584 "is_configured": true, 00:25:31.584 "data_offset": 0, 00:25:31.584 "data_size": 65536 00:25:31.584 }, 00:25:31.584 { 00:25:31.584 "name": "BaseBdev2", 00:25:31.584 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:31.584 "is_configured": true, 00:25:31.584 "data_offset": 0, 00:25:31.584 "data_size": 65536 00:25:31.584 }, 00:25:31.584 { 00:25:31.584 "name": "BaseBdev3", 00:25:31.584 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:31.584 "is_configured": true, 00:25:31.584 "data_offset": 0, 00:25:31.584 "data_size": 65536 00:25:31.584 }, 00:25:31.584 { 00:25:31.584 "name": "BaseBdev4", 00:25:31.584 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:31.584 "is_configured": true, 00:25:31.584 "data_offset": 0, 00:25:31.584 "data_size": 65536 00:25:31.584 } 00:25:31.584 ] 00:25:31.584 }' 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.584 05:44:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:32.962 "name": "raid_bdev1", 00:25:32.962 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:32.962 "strip_size_kb": 64, 00:25:32.962 "state": "online", 
00:25:32.962 "raid_level": "raid5f", 00:25:32.962 "superblock": false, 00:25:32.962 "num_base_bdevs": 4, 00:25:32.962 "num_base_bdevs_discovered": 4, 00:25:32.962 "num_base_bdevs_operational": 4, 00:25:32.962 "process": { 00:25:32.962 "type": "rebuild", 00:25:32.962 "target": "spare", 00:25:32.962 "progress": { 00:25:32.962 "blocks": 78720, 00:25:32.962 "percent": 40 00:25:32.962 } 00:25:32.962 }, 00:25:32.962 "base_bdevs_list": [ 00:25:32.962 { 00:25:32.962 "name": "spare", 00:25:32.962 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:32.962 "is_configured": true, 00:25:32.962 "data_offset": 0, 00:25:32.962 "data_size": 65536 00:25:32.962 }, 00:25:32.962 { 00:25:32.962 "name": "BaseBdev2", 00:25:32.962 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:32.962 "is_configured": true, 00:25:32.962 "data_offset": 0, 00:25:32.962 "data_size": 65536 00:25:32.962 }, 00:25:32.962 { 00:25:32.962 "name": "BaseBdev3", 00:25:32.962 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:32.962 "is_configured": true, 00:25:32.962 "data_offset": 0, 00:25:32.962 "data_size": 65536 00:25:32.962 }, 00:25:32.962 { 00:25:32.962 "name": "BaseBdev4", 00:25:32.962 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:32.962 "is_configured": true, 00:25:32.962 "data_offset": 0, 00:25:32.962 "data_size": 65536 00:25:32.962 } 00:25:32.962 ] 00:25:32.962 }' 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.962 05:44:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.937 05:44:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.195 05:44:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:34.195 "name": "raid_bdev1", 00:25:34.195 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:34.195 "strip_size_kb": 64, 00:25:34.195 "state": "online", 00:25:34.195 "raid_level": "raid5f", 00:25:34.195 "superblock": false, 00:25:34.195 "num_base_bdevs": 4, 00:25:34.195 "num_base_bdevs_discovered": 4, 00:25:34.195 "num_base_bdevs_operational": 4, 00:25:34.195 "process": { 00:25:34.195 "type": "rebuild", 00:25:34.195 "target": "spare", 00:25:34.195 "progress": { 00:25:34.195 "blocks": 105600, 00:25:34.195 "percent": 53 00:25:34.195 } 00:25:34.195 }, 00:25:34.195 "base_bdevs_list": [ 00:25:34.195 { 00:25:34.195 "name": "spare", 00:25:34.195 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:34.195 "is_configured": true, 00:25:34.195 "data_offset": 0, 00:25:34.195 "data_size": 65536 00:25:34.195 }, 00:25:34.195 { 00:25:34.195 "name": "BaseBdev2", 00:25:34.195 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:34.195 "is_configured": true, 00:25:34.195 "data_offset": 0, 
00:25:34.195 "data_size": 65536 00:25:34.195 }, 00:25:34.195 { 00:25:34.195 "name": "BaseBdev3", 00:25:34.195 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:34.195 "is_configured": true, 00:25:34.195 "data_offset": 0, 00:25:34.195 "data_size": 65536 00:25:34.195 }, 00:25:34.195 { 00:25:34.195 "name": "BaseBdev4", 00:25:34.195 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:34.195 "is_configured": true, 00:25:34.195 "data_offset": 0, 00:25:34.195 "data_size": 65536 00:25:34.195 } 00:25:34.195 ] 00:25:34.196 }' 00:25:34.196 05:44:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:34.196 05:44:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:34.196 05:44:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.196 05:44:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.196 05:44:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.572 "name": "raid_bdev1", 00:25:35.572 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:35.572 "strip_size_kb": 64, 00:25:35.572 "state": "online", 00:25:35.572 "raid_level": "raid5f", 00:25:35.572 "superblock": false, 00:25:35.572 "num_base_bdevs": 4, 00:25:35.572 "num_base_bdevs_discovered": 4, 00:25:35.572 "num_base_bdevs_operational": 4, 00:25:35.572 "process": { 00:25:35.572 "type": "rebuild", 00:25:35.572 "target": "spare", 00:25:35.572 "progress": { 00:25:35.572 "blocks": 130560, 00:25:35.572 "percent": 66 00:25:35.572 } 00:25:35.572 }, 00:25:35.572 "base_bdevs_list": [ 00:25:35.572 { 00:25:35.572 "name": "spare", 00:25:35.572 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:35.572 "is_configured": true, 00:25:35.572 "data_offset": 0, 00:25:35.572 "data_size": 65536 00:25:35.572 }, 00:25:35.572 { 00:25:35.572 "name": "BaseBdev2", 00:25:35.572 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:35.572 "is_configured": true, 00:25:35.572 "data_offset": 0, 00:25:35.572 "data_size": 65536 00:25:35.572 }, 00:25:35.572 { 00:25:35.572 "name": "BaseBdev3", 00:25:35.572 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:35.572 "is_configured": true, 00:25:35.572 "data_offset": 0, 00:25:35.572 "data_size": 65536 00:25:35.572 }, 00:25:35.572 { 00:25:35.572 "name": "BaseBdev4", 00:25:35.572 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:35.572 "is_configured": true, 00:25:35.572 "data_offset": 0, 00:25:35.572 "data_size": 65536 00:25:35.572 } 00:25:35.572 ] 00:25:35.572 }' 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:25:35.572 05:44:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.950 "name": "raid_bdev1", 00:25:36.950 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:36.950 "strip_size_kb": 64, 00:25:36.950 "state": "online", 00:25:36.950 "raid_level": "raid5f", 00:25:36.950 "superblock": false, 00:25:36.950 "num_base_bdevs": 4, 00:25:36.950 "num_base_bdevs_discovered": 4, 00:25:36.950 "num_base_bdevs_operational": 4, 00:25:36.950 "process": { 00:25:36.950 "type": "rebuild", 00:25:36.950 "target": "spare", 00:25:36.950 "progress": { 00:25:36.950 "blocks": 155520, 00:25:36.950 "percent": 79 00:25:36.950 } 00:25:36.950 }, 00:25:36.950 "base_bdevs_list": [ 00:25:36.950 { 00:25:36.950 "name": "spare", 00:25:36.950 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:36.950 "is_configured": true, 00:25:36.950 "data_offset": 0, 00:25:36.950 "data_size": 65536 00:25:36.950 }, 00:25:36.950 { 00:25:36.950 "name": "BaseBdev2", 00:25:36.950 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:36.950 "is_configured": true, 00:25:36.950 "data_offset": 0, 00:25:36.950 "data_size": 65536 00:25:36.950 }, 00:25:36.950 { 00:25:36.950 "name": "BaseBdev3", 00:25:36.950 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:36.950 "is_configured": true, 00:25:36.950 "data_offset": 0, 00:25:36.950 "data_size": 65536 00:25:36.950 }, 00:25:36.950 { 00:25:36.950 "name": "BaseBdev4", 00:25:36.950 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:36.950 "is_configured": true, 00:25:36.950 "data_offset": 0, 00:25:36.950 "data_size": 65536 00:25:36.950 } 00:25:36.950 ] 00:25:36.950 }' 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.950 05:44:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.888 05:44:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.147 05:44:42 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.147 "name": "raid_bdev1", 00:25:38.147 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:38.147 "strip_size_kb": 64, 00:25:38.147 "state": "online", 00:25:38.147 "raid_level": "raid5f", 00:25:38.147 "superblock": false, 00:25:38.147 "num_base_bdevs": 4, 00:25:38.147 "num_base_bdevs_discovered": 4, 00:25:38.147 "num_base_bdevs_operational": 4, 00:25:38.147 "process": { 00:25:38.147 "type": "rebuild", 00:25:38.147 "target": "spare", 00:25:38.147 "progress": { 00:25:38.147 "blocks": 182400, 00:25:38.147 "percent": 92 00:25:38.147 } 00:25:38.147 }, 00:25:38.147 "base_bdevs_list": [ 00:25:38.147 { 00:25:38.147 "name": "spare", 00:25:38.147 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:38.147 "is_configured": true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 }, 00:25:38.147 { 00:25:38.147 "name": "BaseBdev2", 00:25:38.147 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:38.147 "is_configured": true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 }, 00:25:38.147 { 00:25:38.147 "name": "BaseBdev3", 00:25:38.147 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:38.147 "is_configured": true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 }, 00:25:38.147 { 00:25:38.147 "name": "BaseBdev4", 00:25:38.147 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:38.147 "is_configured": true, 00:25:38.147 "data_offset": 0, 00:25:38.147 "data_size": 65536 00:25:38.147 } 00:25:38.147 ] 00:25:38.147 }' 00:25:38.147 05:44:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.406 05:44:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.406 05:44:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.406 05:44:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.406 05:44:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.974 [2024-10-07 05:44:42.873369] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:38.974 [2024-10-07 05:44:42.873577] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:38.974 [2024-10-07 05:44:42.873762] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.234 05:44:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.494 05:44:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.494 "name": "raid_bdev1", 00:25:39.494 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:39.494 "strip_size_kb": 64, 00:25:39.494 "state": "online", 00:25:39.494 "raid_level": "raid5f", 00:25:39.494 "superblock": false, 00:25:39.494 "num_base_bdevs": 4, 00:25:39.494 "num_base_bdevs_discovered": 4, 00:25:39.494 "num_base_bdevs_operational": 4, 00:25:39.494 "base_bdevs_list": [ 00:25:39.494 { 
00:25:39.494 "name": "spare", 00:25:39.494 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:39.494 "is_configured": true, 00:25:39.494 "data_offset": 0, 00:25:39.494 "data_size": 65536 00:25:39.494 }, 00:25:39.494 { 00:25:39.494 "name": "BaseBdev2", 00:25:39.494 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:39.494 "is_configured": true, 00:25:39.494 "data_offset": 0, 00:25:39.494 "data_size": 65536 00:25:39.494 }, 00:25:39.494 { 00:25:39.494 "name": "BaseBdev3", 00:25:39.494 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:39.494 "is_configured": true, 00:25:39.494 "data_offset": 0, 00:25:39.494 "data_size": 65536 00:25:39.494 }, 00:25:39.494 { 00:25:39.494 "name": "BaseBdev4", 00:25:39.494 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:39.494 "is_configured": true, 00:25:39.494 "data_offset": 0, 00:25:39.494 "data_size": 65536 00:25:39.494 } 00:25:39.494 ] 00:25:39.494 }' 00:25:39.494 05:44:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@660 -- # break 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.754 05:44:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.012 "name": "raid_bdev1", 00:25:40.012 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:40.012 "strip_size_kb": 64, 00:25:40.012 "state": "online", 00:25:40.012 "raid_level": "raid5f", 00:25:40.012 "superblock": false, 00:25:40.012 "num_base_bdevs": 4, 00:25:40.012 "num_base_bdevs_discovered": 4, 00:25:40.012 "num_base_bdevs_operational": 4, 00:25:40.012 "base_bdevs_list": [ 00:25:40.012 { 00:25:40.012 "name": "spare", 00:25:40.012 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:40.012 "is_configured": true, 00:25:40.012 "data_offset": 0, 00:25:40.012 "data_size": 65536 00:25:40.012 }, 00:25:40.012 { 00:25:40.012 "name": "BaseBdev2", 00:25:40.012 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:40.012 "is_configured": true, 00:25:40.012 "data_offset": 0, 00:25:40.012 "data_size": 65536 00:25:40.012 }, 00:25:40.012 { 00:25:40.012 "name": "BaseBdev3", 00:25:40.012 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:40.012 "is_configured": true, 00:25:40.012 "data_offset": 0, 00:25:40.012 "data_size": 65536 00:25:40.012 }, 00:25:40.012 { 00:25:40.012 "name": "BaseBdev4", 00:25:40.012 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:40.012 "is_configured": true, 00:25:40.012 "data_offset": 0, 00:25:40.012 "data_size": 65536 00:25:40.012 } 00:25:40.012 ] 00:25:40.012 }' 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.012 05:44:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.271 05:44:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:40.271 "name": "raid_bdev1", 00:25:40.271 "uuid": "4619c3f3-73e9-4534-8807-037e9bfaecb1", 00:25:40.271 "strip_size_kb": 64, 00:25:40.271 "state": "online", 00:25:40.271 "raid_level": "raid5f", 00:25:40.271 "superblock": false, 00:25:40.271 "num_base_bdevs": 4, 00:25:40.271 "num_base_bdevs_discovered": 4, 00:25:40.271 "num_base_bdevs_operational": 4, 00:25:40.271 "base_bdevs_list": [ 00:25:40.271 { 00:25:40.271 "name": "spare", 00:25:40.271 "uuid": "2704704f-7692-5086-a8d2-30d25ce8a4d5", 00:25:40.271 "is_configured": true, 00:25:40.271 "data_offset": 0, 00:25:40.271 "data_size": 65536 00:25:40.271 }, 00:25:40.271 { 00:25:40.271 "name": "BaseBdev2", 00:25:40.271 "uuid": "90521416-54dd-4e1d-a433-e407051ec52c", 00:25:40.271 "is_configured": true, 00:25:40.271 "data_offset": 0, 00:25:40.271 "data_size": 65536 00:25:40.271 }, 00:25:40.271 { 00:25:40.271 "name": "BaseBdev3", 00:25:40.271 "uuid": "5ca376df-c0d1-4ee4-a1d2-2c1301991008", 00:25:40.271 "is_configured": true, 00:25:40.271 "data_offset": 0, 00:25:40.271 "data_size": 65536 00:25:40.271 }, 00:25:40.271 { 00:25:40.271 "name": "BaseBdev4", 00:25:40.271 "uuid": "09bcd479-9853-4393-9b58-cd4494da77c4", 00:25:40.271 "is_configured": true, 00:25:40.271 "data_offset": 0, 00:25:40.271 "data_size": 65536 00:25:40.271 } 00:25:40.271 ] 00:25:40.271 }' 00:25:40.271 05:44:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:40.271 05:44:44 -- common/autotest_common.sh@10 -- # set +x 00:25:40.838 05:44:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:41.096 [2024-10-07 05:44:44.999121] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:41.096 [2024-10-07 05:44:44.999277] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:41.096 [2024-10-07 05:44:44.999484] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:41.096 [2024-10-07 05:44:44.999734] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:41.096 [2024-10-07 05:44:44.999855] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:25:41.096 05:44:45 -- bdev/bdev_raid.sh@671 -- # jq length 
00:25:41.096 05:44:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.356 05:44:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:41.356 05:44:45 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:41.356 05:44:45 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.356 05:44:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:41.614 /dev/nbd0 00:25:41.614 05:44:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:41.614 05:44:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:41.614 05:44:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:41.614 05:44:45 -- common/autotest_common.sh@857 -- # local i 00:25:41.614 05:44:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:41.614 05:44:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:41.614 05:44:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:41.614 05:44:45 -- common/autotest_common.sh@861 -- # break 00:25:41.614 05:44:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:41.614 05:44:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:41.614 05:44:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.614 1+0 records in 00:25:41.614 1+0 records out 00:25:41.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043699 s, 9.4 MB/s 00:25:41.614 05:44:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.614 05:44:45 -- common/autotest_common.sh@874 -- # size=4096 00:25:41.614 05:44:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.614 05:44:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:41.614 05:44:45 -- common/autotest_common.sh@877 -- # return 0 00:25:41.614 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.614 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.615 05:44:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:41.875 /dev/nbd1 00:25:41.875 05:44:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:41.875 05:44:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:41.875 05:44:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:41.875 05:44:45 -- common/autotest_common.sh@857 -- # local i 00:25:41.875 05:44:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:41.875 05:44:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:41.875 05:44:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:41.875 05:44:45 -- common/autotest_common.sh@861 -- # break 
00:25:41.875 05:44:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:41.875 05:44:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:41.875 05:44:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:41.875 1+0 records in 00:25:41.875 1+0 records out 00:25:41.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539835 s, 7.6 MB/s 00:25:41.875 05:44:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.875 05:44:45 -- common/autotest_common.sh@874 -- # size=4096 00:25:41.875 05:44:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:41.875 05:44:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:41.875 05:44:45 -- common/autotest_common.sh@877 -- # return 0 00:25:41.875 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:41.875 05:44:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:41.875 05:44:45 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:42.134 05:44:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@51 -- # local i 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.134 05:44:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@41 -- # break 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:42.393 05:44:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@41 -- # break 00:25:42.653 05:44:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:42.653 05:44:46 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:42.653 05:44:46 -- bdev/bdev_raid.sh@709 -- # killprocess 174706 00:25:42.653 05:44:46 -- common/autotest_common.sh@926 -- # '[' -z 174706 ']' 00:25:42.653 05:44:46 -- common/autotest_common.sh@930 -- # kill -0 174706 00:25:42.653 05:44:46 -- common/autotest_common.sh@931 -- # uname 00:25:42.653 05:44:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:42.653 05:44:46 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 174706 00:25:42.653 05:44:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:42.653 05:44:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:42.653 05:44:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 174706' 00:25:42.653 killing process with pid 174706 00:25:42.653 05:44:46 -- common/autotest_common.sh@945 -- # kill 174706 00:25:42.653 Received shutdown signal, test time was about 60.000000 seconds 00:25:42.653 00:25:42.653 Latency(us) 00:25:42.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.653 =================================================================================================================== 00:25:42.653 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:42.653 [2024-10-07 05:44:46.502527] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:42.653 05:44:46 -- common/autotest_common.sh@950 -- # wait 174706 00:25:42.912 [2024-10-07 05:44:46.835559] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:44.291 ************************************ 00:25:44.291 END TEST raid5f_rebuild_test 00:25:44.291 ************************************ 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:44.291 00:25:44.291 real 0m25.075s 00:25:44.291 user 0m36.471s 00:25:44.291 sys 0m2.701s 00:25:44.291 05:44:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.291 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:44.291 05:44:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:44.291 05:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.291 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:25:44.291 ************************************ 00:25:44.291 START TEST raid5f_rebuild_test_sb 00:25:44.291 ************************************ 00:25:44.291 05:44:47 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=175329 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 175329 /var/tmp/spdk-raid.sock 00:25:44.291 05:44:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:44.291 05:44:47 -- common/autotest_common.sh@819 -- # '[' -z 175329 ']' 00:25:44.291 05:44:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:44.291 05:44:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.291 05:44:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:44.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:44.291 05:44:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.291 05:44:47 -- common/autotest_common.sh@10 -- # set +x 00:25:44.291 [2024-10-07 05:44:48.010049] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:25:44.291 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:44.291 Zero copy mechanism will not be used. 
00:25:44.291 [2024-10-07 05:44:48.010238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175329 ] 00:25:44.291 [2024-10-07 05:44:48.180565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.557 [2024-10-07 05:44:48.364714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.817 [2024-10-07 05:44:48.551937] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:45.075 05:44:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.075 05:44:48 -- common/autotest_common.sh@852 -- # return 0 00:25:45.076 05:44:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:45.076 05:44:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:45.076 05:44:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:45.334 BaseBdev1_malloc 00:25:45.334 05:44:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:45.593 [2024-10-07 05:44:49.380611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:45.593 [2024-10-07 05:44:49.380833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.593 [2024-10-07 05:44:49.380905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:45.593 [2024-10-07 05:44:49.381087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.593 [2024-10-07 05:44:49.383479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.593 [2024-10-07 05:44:49.383721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.593 BaseBdev1 00:25:45.593 05:44:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:45.593 05:44:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:45.593 05:44:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:45.853 BaseBdev2_malloc 00:25:45.853 05:44:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:45.853 [2024-10-07 05:44:49.813128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:45.853 [2024-10-07 05:44:49.813321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.853 [2024-10-07 05:44:49.813403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:45.853 [2024-10-07 05:44:49.813571] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.853 [2024-10-07 05:44:49.815869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.853 [2024-10-07 05:44:49.816047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:45.853 BaseBdev2 00:25:45.853 05:44:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:45.853 05:44:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:45.853 05:44:49 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:46.112 BaseBdev3_malloc 00:25:46.112 05:44:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:46.370 [2024-10-07 05:44:50.226032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:46.370 [2024-10-07 05:44:50.226225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.370 [2024-10-07 05:44:50.226304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:46.370 [2024-10-07 05:44:50.226457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.370 [2024-10-07 05:44:50.229000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.370 [2024-10-07 05:44:50.229176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:46.370 BaseBdev3 00:25:46.370 05:44:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:46.370 05:44:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:46.370 05:44:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:46.629 BaseBdev4_malloc 00:25:46.629 05:44:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:46.888 [2024-10-07 05:44:50.647349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:46.888 [2024-10-07 05:44:50.647589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.888 [2024-10-07 05:44:50.647666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:46.888 [2024-10-07 05:44:50.647845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.888 [2024-10-07 05:44:50.650203] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.888 [2024-10-07 05:44:50.650370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:46.888 BaseBdev4 00:25:46.888 05:44:50 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:47.147 spare_malloc 00:25:47.147 05:44:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:47.147 spare_delay 00:25:47.147 05:44:51 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:47.406 [2024-10-07 05:44:51.292790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:47.406 [2024-10-07 05:44:51.292987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.406 [2024-10-07 05:44:51.293060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:47.406 [2024-10-07 05:44:51.293212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.406 [2024-10-07 05:44:51.295662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:47.406 [2024-10-07 05:44:51.295883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:47.406 spare 00:25:47.406 05:44:51 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:47.664 [2024-10-07 05:44:51.476911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:47.664 [2024-10-07 05:44:51.479065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.664 [2024-10-07 05:44:51.479268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.664 [2024-10-07 05:44:51.479369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:47.664 [2024-10-07 05:44:51.479703] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:47.664 [2024-10-07 05:44:51.479758] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:47.664 [2024-10-07 05:44:51.479995] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:47.664 [2024-10-07 05:44:51.485697] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:47.664 [2024-10-07 05:44:51.485829] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:47.664 [2024-10-07 05:44:51.486101] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.664 05:44:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:47.664 05:44:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:47.664 05:44:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:47.664 05:44:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:47.664 05:44:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.665 05:44:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.923 05:44:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.923 "name": "raid_bdev1", 00:25:47.923 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:47.923 "strip_size_kb": 64, 00:25:47.923 "state": "online", 00:25:47.923 "raid_level": "raid5f", 00:25:47.923 "superblock": true, 00:25:47.923 "num_base_bdevs": 4, 00:25:47.923 "num_base_bdevs_discovered": 4, 00:25:47.923 "num_base_bdevs_operational": 4, 00:25:47.923 "base_bdevs_list": [ 00:25:47.923 { 00:25:47.923 "name": "BaseBdev1", 00:25:47.923 "uuid": "44108556-dfa5-5d90-954f-67ad6f4f5d41", 00:25:47.923 "is_configured": true, 00:25:47.923 "data_offset": 2048, 00:25:47.923 "data_size": 63488 00:25:47.923 }, 00:25:47.923 { 00:25:47.923 "name": "BaseBdev2", 00:25:47.923 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:47.923 "is_configured": true, 00:25:47.923 
"data_offset": 2048, 00:25:47.923 "data_size": 63488 00:25:47.923 }, 00:25:47.923 { 00:25:47.923 "name": "BaseBdev3", 00:25:47.923 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:47.923 "is_configured": true, 00:25:47.923 "data_offset": 2048, 00:25:47.923 "data_size": 63488 00:25:47.923 }, 00:25:47.923 { 00:25:47.923 "name": "BaseBdev4", 00:25:47.923 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:47.923 "is_configured": true, 00:25:47.923 "data_offset": 2048, 00:25:47.923 "data_size": 63488 00:25:47.923 } 00:25:47.923 ] 00:25:47.923 }' 00:25:47.923 05:44:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.923 05:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:48.491 05:44:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:48.491 05:44:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:48.751 [2024-10-07 05:44:52.525017] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.751 05:44:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:48.751 05:44:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:48.751 05:44:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.049 05:44:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:49.049 05:44:52 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:49.049 05:44:52 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:49.049 05:44:52 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@12 -- # local i 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:49.049 [2024-10-07 05:44:52.949033] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:49.049 /dev/nbd0 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:49.049 05:44:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:49.049 05:44:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:49.049 05:44:52 -- common/autotest_common.sh@857 -- # local i 00:25:49.049 05:44:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:49.049 05:44:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:49.049 05:44:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:49.310 05:44:52 -- common/autotest_common.sh@861 -- # break 00:25:49.310 05:44:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:49.310 05:44:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:49.310 05:44:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:49.310 1+0 records in 00:25:49.310 1+0 records out 00:25:49.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000284205 s, 14.4 MB/s 00:25:49.310 05:44:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:49.310 05:44:53 -- common/autotest_common.sh@874 -- # size=4096 00:25:49.310 05:44:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:49.310 05:44:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:49.310 05:44:53 -- common/autotest_common.sh@877 -- # return 0 00:25:49.310 05:44:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:49.310 05:44:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:49.310 05:44:53 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:49.310 05:44:53 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:49.310 05:44:53 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:49.310 05:44:53 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:49.877 496+0 records in 00:25:49.877 496+0 records out 00:25:49.877 97517568 bytes (98 MB, 93 MiB) copied, 0.550273 s, 177 MB/s 00:25:49.877 05:44:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@51 -- # local i 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:49.877 [2024-10-07 05:44:53.807595] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@41 -- # break 00:25:49.877 05:44:53 -- bdev/nbd_common.sh@45 -- # return 0 00:25:49.877 05:44:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:50.136 [2024-10-07 05:44:54.062552] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:50.136 05:44:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.395 05:44:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.395 "name": "raid_bdev1", 00:25:50.395 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:50.395 "strip_size_kb": 64, 00:25:50.395 "state": "online", 00:25:50.395 "raid_level": "raid5f", 00:25:50.395 "superblock": true, 00:25:50.395 "num_base_bdevs": 4, 00:25:50.395 "num_base_bdevs_discovered": 3, 00:25:50.395 "num_base_bdevs_operational": 3, 00:25:50.395 "base_bdevs_list": [ 00:25:50.395 { 00:25:50.395 "name": null, 00:25:50.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.395 "is_configured": false, 00:25:50.395 "data_offset": 2048, 00:25:50.395 "data_size": 63488 00:25:50.395 }, 00:25:50.395 { 00:25:50.395 "name": "BaseBdev2", 00:25:50.395 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:50.395 "is_configured": true, 00:25:50.395 "data_offset": 2048, 00:25:50.395 "data_size": 63488 00:25:50.395 }, 00:25:50.395 { 00:25:50.395 "name": "BaseBdev3", 00:25:50.395 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:50.395 "is_configured": true, 00:25:50.395 "data_offset": 2048, 00:25:50.395 "data_size": 63488 00:25:50.395 }, 00:25:50.395 { 00:25:50.395 "name": "BaseBdev4", 00:25:50.395 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:50.395 "is_configured": true, 00:25:50.395 "data_offset": 2048, 00:25:50.395 "data_size": 63488 00:25:50.395 } 00:25:50.395 ] 00:25:50.395 }' 00:25:50.395 05:44:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.395 05:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:50.962 05:44:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:51.220 [2024-10-07 05:44:55.070752] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:51.220 [2024-10-07 05:44:55.070798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:51.220 [2024-10-07 05:44:55.081341] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:25:51.220 [2024-10-07 05:44:55.088542] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:51.220 05:44:55 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.156 05:44:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.415 05:44:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:52.415 "name": "raid_bdev1", 00:25:52.415 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:52.415 "strip_size_kb": 64, 00:25:52.415 "state": "online", 00:25:52.415 "raid_level": "raid5f", 00:25:52.415 "superblock": true, 00:25:52.415 "num_base_bdevs": 4, 00:25:52.415 "num_base_bdevs_discovered": 4, 00:25:52.415 "num_base_bdevs_operational": 4, 00:25:52.415 "process": { 00:25:52.415 "type": "rebuild", 00:25:52.415 "target": "spare", 00:25:52.415 "progress": { 
00:25:52.415 "blocks": 23040, 00:25:52.415 "percent": 12 00:25:52.415 } 00:25:52.415 }, 00:25:52.415 "base_bdevs_list": [ 00:25:52.415 { 00:25:52.415 "name": "spare", 00:25:52.415 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:52.415 "is_configured": true, 00:25:52.415 "data_offset": 2048, 00:25:52.415 "data_size": 63488 00:25:52.415 }, 00:25:52.415 { 00:25:52.415 "name": "BaseBdev2", 00:25:52.415 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:52.415 "is_configured": true, 00:25:52.415 "data_offset": 2048, 00:25:52.415 "data_size": 63488 00:25:52.415 }, 00:25:52.415 { 00:25:52.415 "name": "BaseBdev3", 00:25:52.415 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:52.415 "is_configured": true, 00:25:52.415 "data_offset": 2048, 00:25:52.415 "data_size": 63488 00:25:52.415 }, 00:25:52.415 { 00:25:52.415 "name": "BaseBdev4", 00:25:52.415 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:52.415 "is_configured": true, 00:25:52.415 "data_offset": 2048, 00:25:52.415 "data_size": 63488 00:25:52.415 } 00:25:52.415 ] 00:25:52.415 }' 00:25:52.415 05:44:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:52.415 05:44:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.415 05:44:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.674 05:44:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.674 05:44:56 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:52.932 [2024-10-07 05:44:56.673578] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.932 [2024-10-07 05:44:56.700589] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:52.932 [2024-10-07 05:44:56.700660] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.932 05:44:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.192 05:44:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.192 "name": "raid_bdev1", 00:25:53.192 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:53.192 "strip_size_kb": 64, 00:25:53.192 "state": "online", 00:25:53.192 "raid_level": "raid5f", 00:25:53.192 "superblock": true, 00:25:53.192 "num_base_bdevs": 4, 00:25:53.192 "num_base_bdevs_discovered": 3, 00:25:53.192 "num_base_bdevs_operational": 3, 00:25:53.192 "base_bdevs_list": [ 00:25:53.192 { 00:25:53.192 "name": null, 00:25:53.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.192 "is_configured": 
false, 00:25:53.192 "data_offset": 2048, 00:25:53.192 "data_size": 63488 00:25:53.192 }, 00:25:53.192 { 00:25:53.192 "name": "BaseBdev2", 00:25:53.192 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:53.192 "is_configured": true, 00:25:53.192 "data_offset": 2048, 00:25:53.192 "data_size": 63488 00:25:53.192 }, 00:25:53.192 { 00:25:53.192 "name": "BaseBdev3", 00:25:53.192 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:53.192 "is_configured": true, 00:25:53.192 "data_offset": 2048, 00:25:53.192 "data_size": 63488 00:25:53.192 }, 00:25:53.192 { 00:25:53.192 "name": "BaseBdev4", 00:25:53.192 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:53.192 "is_configured": true, 00:25:53.192 "data_offset": 2048, 00:25:53.192 "data_size": 63488 00:25:53.192 } 00:25:53.192 ] 00:25:53.192 }' 00:25:53.192 05:44:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.192 05:44:56 -- common/autotest_common.sh@10 -- # set +x 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.760 05:44:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.018 05:44:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.018 "name": "raid_bdev1", 00:25:54.018 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:54.018 "strip_size_kb": 64, 00:25:54.018 "state": "online", 00:25:54.018 "raid_level": "raid5f", 00:25:54.018 "superblock": true, 00:25:54.018 "num_base_bdevs": 4, 00:25:54.018 "num_base_bdevs_discovered": 3, 00:25:54.018 "num_base_bdevs_operational": 3, 00:25:54.018 "base_bdevs_list": [ 00:25:54.018 { 00:25:54.018 "name": null, 00:25:54.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.018 "is_configured": false, 00:25:54.018 "data_offset": 2048, 00:25:54.018 "data_size": 63488 00:25:54.018 }, 00:25:54.018 { 00:25:54.018 "name": "BaseBdev2", 00:25:54.018 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:54.018 "is_configured": true, 00:25:54.019 "data_offset": 2048, 00:25:54.019 "data_size": 63488 00:25:54.019 }, 00:25:54.019 { 00:25:54.019 "name": "BaseBdev3", 00:25:54.019 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:54.019 "is_configured": true, 00:25:54.019 "data_offset": 2048, 00:25:54.019 "data_size": 63488 00:25:54.019 }, 00:25:54.019 { 00:25:54.019 "name": "BaseBdev4", 00:25:54.019 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:54.019 "is_configured": true, 00:25:54.019 "data_offset": 2048, 00:25:54.019 "data_size": 63488 00:25:54.019 } 00:25:54.019 ] 00:25:54.019 }' 00:25:54.019 05:44:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:54.019 05:44:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:54.019 05:44:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:54.019 05:44:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:54.019 05:44:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:54.277 [2024-10-07 05:44:58.063729] bdev_raid.c:3095:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:25:54.277 [2024-10-07 05:44:58.063769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:54.277 [2024-10-07 05:44:58.073277] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:25:54.277 [2024-10-07 05:44:58.080357] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:54.277 05:44:58 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.211 05:44:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.469 05:44:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:55.469 "name": "raid_bdev1", 00:25:55.469 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:55.469 "strip_size_kb": 64, 00:25:55.469 "state": "online", 00:25:55.469 "raid_level": "raid5f", 00:25:55.469 "superblock": true, 00:25:55.469 "num_base_bdevs": 4, 00:25:55.469 "num_base_bdevs_discovered": 4, 00:25:55.469 "num_base_bdevs_operational": 4, 00:25:55.469 "process": { 00:25:55.469 "type": "rebuild", 00:25:55.469 "target": "spare", 00:25:55.469 "progress": { 00:25:55.469 "blocks": 23040, 00:25:55.469 "percent": 12 00:25:55.469 } 00:25:55.469 }, 00:25:55.469 "base_bdevs_list": [ 00:25:55.469 { 00:25:55.469 "name": "spare", 00:25:55.469 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:55.469 "is_configured": true, 00:25:55.469 "data_offset": 2048, 00:25:55.469 "data_size": 63488 00:25:55.469 }, 00:25:55.469 { 00:25:55.469 "name": "BaseBdev2", 00:25:55.469 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:55.469 "is_configured": true, 00:25:55.469 "data_offset": 2048, 00:25:55.469 "data_size": 63488 00:25:55.469 }, 00:25:55.469 { 00:25:55.469 "name": "BaseBdev3", 00:25:55.469 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:55.469 "is_configured": true, 00:25:55.469 "data_offset": 2048, 00:25:55.469 "data_size": 63488 00:25:55.469 }, 00:25:55.469 { 00:25:55.469 "name": "BaseBdev4", 00:25:55.470 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:55.470 "is_configured": true, 00:25:55.470 "data_offset": 2048, 00:25:55.470 "data_size": 63488 00:25:55.470 } 00:25:55.470 ] 00:25:55.470 }' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:55.470 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@657 -- # local timeout=739 
00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.470 05:44:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.729 05:44:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:55.729 "name": "raid_bdev1", 00:25:55.729 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:55.729 "strip_size_kb": 64, 00:25:55.729 "state": "online", 00:25:55.729 "raid_level": "raid5f", 00:25:55.729 "superblock": true, 00:25:55.729 "num_base_bdevs": 4, 00:25:55.729 "num_base_bdevs_discovered": 4, 00:25:55.729 "num_base_bdevs_operational": 4, 00:25:55.729 "process": { 00:25:55.729 "type": "rebuild", 00:25:55.729 "target": "spare", 00:25:55.729 "progress": { 00:25:55.729 "blocks": 28800, 00:25:55.729 "percent": 15 00:25:55.729 } 00:25:55.729 }, 00:25:55.729 "base_bdevs_list": [ 00:25:55.729 { 00:25:55.729 "name": "spare", 00:25:55.729 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:55.729 "is_configured": true, 00:25:55.729 "data_offset": 2048, 00:25:55.729 "data_size": 63488 00:25:55.729 }, 00:25:55.729 { 00:25:55.729 "name": "BaseBdev2", 00:25:55.729 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:55.729 "is_configured": true, 00:25:55.729 "data_offset": 2048, 00:25:55.729 "data_size": 63488 00:25:55.729 }, 00:25:55.729 { 00:25:55.729 "name": "BaseBdev3", 00:25:55.729 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:55.729 "is_configured": true, 00:25:55.729 "data_offset": 2048, 00:25:55.729 "data_size": 63488 00:25:55.729 }, 00:25:55.729 { 00:25:55.729 "name": "BaseBdev4", 00:25:55.729 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:55.729 "is_configured": true, 00:25:55.729 "data_offset": 2048, 00:25:55.729 "data_size": 63488 00:25:55.729 } 00:25:55.729 ] 00:25:55.729 }' 00:25:55.729 05:44:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:55.729 05:44:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.729 05:44:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.988 05:44:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.988 05:44:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.926 05:45:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.185 05:45:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:57.185 "name": 
"raid_bdev1", 00:25:57.185 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:57.185 "strip_size_kb": 64, 00:25:57.185 "state": "online", 00:25:57.185 "raid_level": "raid5f", 00:25:57.185 "superblock": true, 00:25:57.185 "num_base_bdevs": 4, 00:25:57.185 "num_base_bdevs_discovered": 4, 00:25:57.185 "num_base_bdevs_operational": 4, 00:25:57.185 "process": { 00:25:57.185 "type": "rebuild", 00:25:57.185 "target": "spare", 00:25:57.185 "progress": { 00:25:57.185 "blocks": 53760, 00:25:57.185 "percent": 28 00:25:57.185 } 00:25:57.185 }, 00:25:57.185 "base_bdevs_list": [ 00:25:57.185 { 00:25:57.185 "name": "spare", 00:25:57.185 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.185 { 00:25:57.185 "name": "BaseBdev2", 00:25:57.185 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.185 { 00:25:57.185 "name": "BaseBdev3", 00:25:57.185 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:57.185 "is_configured": true, 00:25:57.185 "data_offset": 2048, 00:25:57.185 "data_size": 63488 00:25:57.185 }, 00:25:57.186 { 00:25:57.186 "name": "BaseBdev4", 00:25:57.186 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:57.186 "is_configured": true, 00:25:57.186 "data_offset": 2048, 00:25:57.186 "data_size": 63488 00:25:57.186 } 00:25:57.186 ] 00:25:57.186 }' 00:25:57.186 05:45:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:57.186 05:45:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:57.186 05:45:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:57.186 05:45:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:57.186 05:45:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:58.563 "name": "raid_bdev1", 00:25:58.563 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:58.563 "strip_size_kb": 64, 00:25:58.563 "state": "online", 00:25:58.563 "raid_level": "raid5f", 00:25:58.563 "superblock": true, 00:25:58.563 "num_base_bdevs": 4, 00:25:58.563 "num_base_bdevs_discovered": 4, 00:25:58.563 "num_base_bdevs_operational": 4, 00:25:58.563 "process": { 00:25:58.563 "type": "rebuild", 00:25:58.563 "target": "spare", 00:25:58.563 "progress": { 00:25:58.563 "blocks": 80640, 00:25:58.563 "percent": 42 00:25:58.563 } 00:25:58.563 }, 00:25:58.563 "base_bdevs_list": [ 00:25:58.563 { 00:25:58.563 "name": "spare", 00:25:58.563 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:58.563 "is_configured": true, 00:25:58.563 "data_offset": 2048, 00:25:58.563 "data_size": 63488 00:25:58.563 }, 00:25:58.563 { 00:25:58.563 
"name": "BaseBdev2", 00:25:58.563 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:58.563 "is_configured": true, 00:25:58.563 "data_offset": 2048, 00:25:58.563 "data_size": 63488 00:25:58.563 }, 00:25:58.563 { 00:25:58.563 "name": "BaseBdev3", 00:25:58.563 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:58.563 "is_configured": true, 00:25:58.563 "data_offset": 2048, 00:25:58.563 "data_size": 63488 00:25:58.563 }, 00:25:58.563 { 00:25:58.563 "name": "BaseBdev4", 00:25:58.563 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:58.563 "is_configured": true, 00:25:58.563 "data_offset": 2048, 00:25:58.563 "data_size": 63488 00:25:58.563 } 00:25:58.563 ] 00:25:58.563 }' 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.563 05:45:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.500 05:45:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.758 05:45:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:59.758 "name": "raid_bdev1", 00:25:59.758 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:25:59.758 "strip_size_kb": 64, 00:25:59.758 "state": "online", 00:25:59.758 "raid_level": "raid5f", 00:25:59.758 "superblock": true, 00:25:59.758 "num_base_bdevs": 4, 00:25:59.758 "num_base_bdevs_discovered": 4, 00:25:59.758 "num_base_bdevs_operational": 4, 00:25:59.758 "process": { 00:25:59.758 "type": "rebuild", 00:25:59.758 "target": "spare", 00:25:59.758 "progress": { 00:25:59.758 "blocks": 105600, 00:25:59.758 "percent": 55 00:25:59.758 } 00:25:59.758 }, 00:25:59.758 "base_bdevs_list": [ 00:25:59.758 { 00:25:59.758 "name": "spare", 00:25:59.758 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:25:59.758 "is_configured": true, 00:25:59.758 "data_offset": 2048, 00:25:59.758 "data_size": 63488 00:25:59.758 }, 00:25:59.758 { 00:25:59.758 "name": "BaseBdev2", 00:25:59.758 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:25:59.758 "is_configured": true, 00:25:59.758 "data_offset": 2048, 00:25:59.758 "data_size": 63488 00:25:59.758 }, 00:25:59.758 { 00:25:59.758 "name": "BaseBdev3", 00:25:59.758 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:25:59.758 "is_configured": true, 00:25:59.758 "data_offset": 2048, 00:25:59.758 "data_size": 63488 00:25:59.758 }, 00:25:59.758 { 00:25:59.758 "name": "BaseBdev4", 00:25:59.758 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:25:59.758 "is_configured": true, 00:25:59.758 "data_offset": 2048, 00:25:59.758 "data_size": 63488 00:25:59.758 } 00:25:59.758 ] 00:25:59.758 }' 00:25:59.758 05:45:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:59.758 05:45:03 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:25:59.758 05:45:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:00.017 05:45:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.017 05:45:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.953 05:45:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.212 "name": "raid_bdev1", 00:26:01.212 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:01.212 "strip_size_kb": 64, 00:26:01.212 "state": "online", 00:26:01.212 "raid_level": "raid5f", 00:26:01.212 "superblock": true, 00:26:01.212 "num_base_bdevs": 4, 00:26:01.212 "num_base_bdevs_discovered": 4, 00:26:01.212 "num_base_bdevs_operational": 4, 00:26:01.212 "process": { 00:26:01.212 "type": "rebuild", 00:26:01.212 "target": "spare", 00:26:01.212 "progress": { 00:26:01.212 "blocks": 130560, 00:26:01.212 "percent": 68 00:26:01.212 } 00:26:01.212 }, 00:26:01.212 "base_bdevs_list": [ 00:26:01.212 { 00:26:01.212 "name": "spare", 00:26:01.212 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:01.212 "is_configured": true, 00:26:01.212 "data_offset": 2048, 00:26:01.212 "data_size": 63488 00:26:01.212 }, 00:26:01.212 { 00:26:01.212 "name": "BaseBdev2", 00:26:01.212 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:01.212 "is_configured": true, 00:26:01.212 "data_offset": 2048, 00:26:01.212 "data_size": 63488 00:26:01.212 }, 00:26:01.212 { 00:26:01.212 "name": "BaseBdev3", 00:26:01.212 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:01.212 "is_configured": true, 00:26:01.212 "data_offset": 2048, 00:26:01.212 "data_size": 63488 00:26:01.212 }, 00:26:01.212 { 00:26:01.212 "name": "BaseBdev4", 00:26:01.212 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:01.212 "is_configured": true, 00:26:01.212 "data_offset": 2048, 00:26:01.212 "data_size": 63488 00:26:01.212 } 00:26:01.212 ] 00:26:01.212 }' 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.212 05:45:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.148 05:45:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.407 05:45:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:02.407 "name": "raid_bdev1", 00:26:02.407 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:02.407 "strip_size_kb": 64, 00:26:02.407 "state": "online", 00:26:02.407 "raid_level": "raid5f", 00:26:02.407 "superblock": true, 00:26:02.407 "num_base_bdevs": 4, 00:26:02.407 "num_base_bdevs_discovered": 4, 00:26:02.407 "num_base_bdevs_operational": 4, 00:26:02.407 "process": { 00:26:02.407 "type": "rebuild", 00:26:02.407 "target": "spare", 00:26:02.407 "progress": { 00:26:02.407 "blocks": 155520, 00:26:02.407 "percent": 81 00:26:02.407 } 00:26:02.407 }, 00:26:02.407 "base_bdevs_list": [ 00:26:02.407 { 00:26:02.407 "name": "spare", 00:26:02.407 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:02.407 "is_configured": true, 00:26:02.407 "data_offset": 2048, 00:26:02.407 "data_size": 63488 00:26:02.407 }, 00:26:02.407 { 00:26:02.407 "name": "BaseBdev2", 00:26:02.407 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:02.407 "is_configured": true, 00:26:02.407 "data_offset": 2048, 00:26:02.407 "data_size": 63488 00:26:02.407 }, 00:26:02.407 { 00:26:02.407 "name": "BaseBdev3", 00:26:02.407 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:02.407 "is_configured": true, 00:26:02.407 "data_offset": 2048, 00:26:02.407 "data_size": 63488 00:26:02.407 }, 00:26:02.407 { 00:26:02.407 "name": "BaseBdev4", 00:26:02.407 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:02.407 "is_configured": true, 00:26:02.407 "data_offset": 2048, 00:26:02.407 "data_size": 63488 00:26:02.407 } 00:26:02.407 ] 00:26:02.407 }' 00:26:02.407 05:45:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:02.666 05:45:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.666 05:45:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:02.666 05:45:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.666 05:45:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.601 05:45:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.860 "name": "raid_bdev1", 00:26:03.860 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:03.860 "strip_size_kb": 64, 00:26:03.860 "state": "online", 00:26:03.860 "raid_level": "raid5f", 00:26:03.860 "superblock": true, 00:26:03.860 "num_base_bdevs": 4, 00:26:03.860 "num_base_bdevs_discovered": 4, 00:26:03.860 "num_base_bdevs_operational": 4, 00:26:03.860 "process": { 00:26:03.860 "type": "rebuild", 00:26:03.860 "target": "spare", 00:26:03.860 "progress": { 00:26:03.860 "blocks": 182400, 00:26:03.860 "percent": 95 00:26:03.860 } 00:26:03.860 }, 
00:26:03.860 "base_bdevs_list": [ 00:26:03.860 { 00:26:03.860 "name": "spare", 00:26:03.860 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:03.860 "is_configured": true, 00:26:03.860 "data_offset": 2048, 00:26:03.860 "data_size": 63488 00:26:03.860 }, 00:26:03.860 { 00:26:03.860 "name": "BaseBdev2", 00:26:03.860 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:03.860 "is_configured": true, 00:26:03.860 "data_offset": 2048, 00:26:03.860 "data_size": 63488 00:26:03.860 }, 00:26:03.860 { 00:26:03.860 "name": "BaseBdev3", 00:26:03.860 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:03.860 "is_configured": true, 00:26:03.860 "data_offset": 2048, 00:26:03.860 "data_size": 63488 00:26:03.860 }, 00:26:03.860 { 00:26:03.860 "name": "BaseBdev4", 00:26:03.860 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:03.860 "is_configured": true, 00:26:03.860 "data_offset": 2048, 00:26:03.860 "data_size": 63488 00:26:03.860 } 00:26:03.860 ] 00:26:03.860 }' 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.860 05:45:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:04.436 [2024-10-07 05:45:08.153228] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:04.436 [2024-10-07 05:45:08.153298] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:04.436 [2024-10-07 05:45:08.153488] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.054 05:45:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.054 05:45:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:05.054 "name": "raid_bdev1", 00:26:05.054 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:05.054 "strip_size_kb": 64, 00:26:05.054 "state": "online", 00:26:05.054 "raid_level": "raid5f", 00:26:05.054 "superblock": true, 00:26:05.054 "num_base_bdevs": 4, 00:26:05.054 "num_base_bdevs_discovered": 4, 00:26:05.054 "num_base_bdevs_operational": 4, 00:26:05.054 "base_bdevs_list": [ 00:26:05.054 { 00:26:05.054 "name": "spare", 00:26:05.054 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:05.054 "is_configured": true, 00:26:05.054 "data_offset": 2048, 00:26:05.054 "data_size": 63488 00:26:05.054 }, 00:26:05.054 { 00:26:05.054 "name": "BaseBdev2", 00:26:05.054 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:05.054 "is_configured": true, 00:26:05.054 "data_offset": 2048, 00:26:05.054 "data_size": 63488 00:26:05.054 }, 00:26:05.054 { 00:26:05.054 "name": "BaseBdev3", 00:26:05.054 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:05.054 "is_configured": true, 00:26:05.054 
"data_offset": 2048, 00:26:05.054 "data_size": 63488 00:26:05.054 }, 00:26:05.054 { 00:26:05.054 "name": "BaseBdev4", 00:26:05.054 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:05.054 "is_configured": true, 00:26:05.054 "data_offset": 2048, 00:26:05.054 "data_size": 63488 00:26:05.054 } 00:26:05.054 ] 00:26:05.054 }' 00:26:05.054 05:45:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@660 -- # break 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.312 05:45:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:05.571 "name": "raid_bdev1", 00:26:05.571 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:05.571 "strip_size_kb": 64, 00:26:05.571 "state": "online", 00:26:05.571 "raid_level": "raid5f", 00:26:05.571 "superblock": true, 00:26:05.571 "num_base_bdevs": 4, 00:26:05.571 "num_base_bdevs_discovered": 4, 00:26:05.571 "num_base_bdevs_operational": 4, 00:26:05.571 "base_bdevs_list": [ 00:26:05.571 { 00:26:05.571 "name": "spare", 00:26:05.571 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:05.571 "is_configured": true, 00:26:05.571 "data_offset": 2048, 00:26:05.571 "data_size": 63488 00:26:05.571 }, 00:26:05.571 { 00:26:05.571 "name": "BaseBdev2", 00:26:05.571 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:05.571 "is_configured": true, 00:26:05.571 "data_offset": 2048, 00:26:05.571 "data_size": 63488 00:26:05.571 }, 00:26:05.571 { 00:26:05.571 "name": "BaseBdev3", 00:26:05.571 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:05.571 "is_configured": true, 00:26:05.571 "data_offset": 2048, 00:26:05.571 "data_size": 63488 00:26:05.571 }, 00:26:05.571 { 00:26:05.571 "name": "BaseBdev4", 00:26:05.571 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:05.571 "is_configured": true, 00:26:05.571 "data_offset": 2048, 00:26:05.571 "data_size": 63488 00:26:05.571 } 00:26:05.571 ] 00:26:05.571 }' 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.571 05:45:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.830 05:45:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.830 "name": "raid_bdev1", 00:26:05.830 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:05.830 "strip_size_kb": 64, 00:26:05.830 "state": "online", 00:26:05.830 "raid_level": "raid5f", 00:26:05.830 "superblock": true, 00:26:05.830 "num_base_bdevs": 4, 00:26:05.830 "num_base_bdevs_discovered": 4, 00:26:05.830 "num_base_bdevs_operational": 4, 00:26:05.830 "base_bdevs_list": [ 00:26:05.830 { 00:26:05.830 "name": "spare", 00:26:05.830 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 2048, 00:26:05.830 "data_size": 63488 00:26:05.830 }, 00:26:05.830 { 00:26:05.830 "name": "BaseBdev2", 00:26:05.830 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 2048, 00:26:05.830 "data_size": 63488 00:26:05.830 }, 00:26:05.830 { 00:26:05.830 "name": "BaseBdev3", 00:26:05.830 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 2048, 00:26:05.830 "data_size": 63488 00:26:05.830 }, 00:26:05.830 { 00:26:05.830 "name": "BaseBdev4", 00:26:05.830 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:05.830 "is_configured": true, 00:26:05.830 "data_offset": 2048, 00:26:05.830 "data_size": 63488 00:26:05.830 } 00:26:05.830 ] 00:26:05.830 }' 00:26:05.830 05:45:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.830 05:45:09 -- common/autotest_common.sh@10 -- # set +x 00:26:06.398 05:45:10 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:06.658 [2024-10-07 05:45:10.567102] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:06.658 [2024-10-07 05:45:10.567137] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:06.658 [2024-10-07 05:45:10.567224] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:06.658 [2024-10-07 05:45:10.567332] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:06.658 [2024-10-07 05:45:10.567347] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:26:06.658 05:45:10 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:06.658 05:45:10 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.917 05:45:10 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:06.917 05:45:10 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:06.917 05:45:10 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@12 -- # local i 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.917 05:45:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:07.176 /dev/nbd0 00:26:07.176 05:45:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:07.176 05:45:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:07.176 05:45:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:07.176 05:45:11 -- common/autotest_common.sh@857 -- # local i 00:26:07.176 05:45:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:07.176 05:45:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:07.176 05:45:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:07.176 05:45:11 -- common/autotest_common.sh@861 -- # break 00:26:07.176 05:45:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:07.176 05:45:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:07.176 05:45:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:07.176 1+0 records in 00:26:07.176 1+0 records out 00:26:07.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514268 s, 8.0 MB/s 00:26:07.176 05:45:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.176 05:45:11 -- common/autotest_common.sh@874 -- # size=4096 00:26:07.176 05:45:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.176 05:45:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:07.176 05:45:11 -- common/autotest_common.sh@877 -- # return 0 00:26:07.176 05:45:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:07.176 05:45:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:07.176 05:45:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:07.446 /dev/nbd1 00:26:07.446 05:45:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:07.446 05:45:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:07.446 05:45:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:07.446 05:45:11 -- common/autotest_common.sh@857 -- # local i 00:26:07.446 05:45:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:07.446 05:45:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:07.446 05:45:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:07.446 05:45:11 -- common/autotest_common.sh@861 -- # break 00:26:07.446 05:45:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:07.446 05:45:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:07.446 05:45:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:07.446 1+0 records in 00:26:07.446 1+0 records out 00:26:07.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544857 s, 7.5 MB/s 00:26:07.446 05:45:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.446 
05:45:11 -- common/autotest_common.sh@874 -- # size=4096 00:26:07.446 05:45:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:07.446 05:45:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:07.446 05:45:11 -- common/autotest_common.sh@877 -- # return 0 00:26:07.446 05:45:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:07.446 05:45:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:07.446 05:45:11 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:07.705 05:45:11 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@51 -- # local i 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:07.705 05:45:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@41 -- # break 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@45 -- # return 0 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:07.965 05:45:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:08.225 05:45:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:08.225 05:45:12 -- bdev/nbd_common.sh@41 -- # break 00:26:08.225 05:45:12 -- bdev/nbd_common.sh@45 -- # return 0 00:26:08.225 05:45:12 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:08.225 05:45:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:08.225 05:45:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:08.225 05:45:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:08.225 05:45:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:08.484 [2024-10-07 05:45:12.355907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:08.484 [2024-10-07 05:45:12.356000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.484 [2024-10-07 05:45:12.356043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:08.484 [2024-10-07 05:45:12.356065] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
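The per-bdev sequence traced above for BaseBdev1 repeats for the remaining base bdevs and for the spare. As a minimal sketch (not the verbatim bdev_raid.sh loop), the re-assembly step boils down to deleting and re-creating each passthru bdev over the same RPC socket so the raid module's examine path can re-claim it from its on-disk superblock:

    # Sketch of the re-registration loop; every RPC below also appears in the trace.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_passthru_delete "$bdev"                        # drop the old passthru
        $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev" # recreate; examine re-claims it
    done
    # The spare sits on a delay bdev rather than a plain malloc bdev.
    $rpc bdev_passthru_delete spare
    $rpc bdev_passthru_create -b spare_delay -p spare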
00:26:08.484 [2024-10-07 05:45:12.358544] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.484 [2024-10-07 05:45:12.358626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:08.484 [2024-10-07 05:45:12.358729] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:08.484 [2024-10-07 05:45:12.358829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.484 BaseBdev1 00:26:08.484 05:45:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:08.484 05:45:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:08.484 05:45:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:08.743 05:45:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:08.743 [2024-10-07 05:45:12.707977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:08.743 [2024-10-07 05:45:12.708034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.743 [2024-10-07 05:45:12.708076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:08.743 [2024-10-07 05:45:12.708097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.743 [2024-10-07 05:45:12.708519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.743 [2024-10-07 05:45:12.708586] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:08.743 [2024-10-07 05:45:12.708688] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:08.743 [2024-10-07 05:45:12.708702] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:08.743 [2024-10-07 05:45:12.708710] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:08.743 [2024-10-07 05:45:12.708728] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:26:08.743 [2024-10-07 05:45:12.708794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.743 BaseBdev2 00:26:09.002 05:45:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:09.002 05:45:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:09.002 05:45:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:09.002 05:45:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:09.261 [2024-10-07 05:45:13.124087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:09.261 [2024-10-07 05:45:13.124149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.261 [2024-10-07 05:45:13.124178] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:09.261 [2024-10-07 05:45:13.124204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.261 [2024-10-07 05:45:13.124634] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.261 [2024-10-07 05:45:13.124700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:09.261 [2024-10-07 05:45:13.124815] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:09.261 [2024-10-07 05:45:13.124837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.261 BaseBdev3 00:26:09.261 05:45:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:09.261 05:45:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:09.261 05:45:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:09.523 05:45:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:09.781 [2024-10-07 05:45:13.620208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:09.781 [2024-10-07 05:45:13.620280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.781 [2024-10-07 05:45:13.620314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:09.781 [2024-10-07 05:45:13.620342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.781 [2024-10-07 05:45:13.620770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.781 [2024-10-07 05:45:13.620831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:09.781 [2024-10-07 05:45:13.620931] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:09.781 [2024-10-07 05:45:13.620955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:09.781 BaseBdev4 00:26:09.781 05:45:13 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:10.039 05:45:13 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:10.039 [2024-10-07 05:45:13.988266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:10.039 [2024-10-07 05:45:13.988325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:10.039 [2024-10-07 05:45:13.988354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:10.039 [2024-10-07 05:45:13.988381] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:10.039 [2024-10-07 05:45:13.988819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:10.039 [2024-10-07 05:45:13.988878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:10.039 [2024-10-07 05:45:13.988974] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:10.039 [2024-10-07 05:45:13.989002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:10.039 spare 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.039 05:45:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.298 [2024-10-07 05:45:14.089110] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:26:10.298 [2024-10-07 05:45:14.089135] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:10.298 [2024-10-07 05:45:14.089254] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:26:10.298 [2024-10-07 05:45:14.094397] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:26:10.298 [2024-10-07 05:45:14.094419] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:26:10.298 [2024-10-07 05:45:14.094592] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.298 05:45:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.298 "name": "raid_bdev1", 00:26:10.298 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:10.298 "strip_size_kb": 64, 00:26:10.298 "state": "online", 00:26:10.298 "raid_level": "raid5f", 00:26:10.298 "superblock": true, 00:26:10.298 "num_base_bdevs": 4, 00:26:10.298 "num_base_bdevs_discovered": 4, 00:26:10.298 "num_base_bdevs_operational": 4, 00:26:10.298 "base_bdevs_list": [ 00:26:10.298 { 00:26:10.298 "name": "spare", 00:26:10.298 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:10.298 "is_configured": true, 00:26:10.298 "data_offset": 2048, 00:26:10.298 "data_size": 63488 00:26:10.298 }, 00:26:10.298 { 00:26:10.298 "name": "BaseBdev2", 00:26:10.298 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:10.298 "is_configured": true, 00:26:10.298 "data_offset": 2048, 00:26:10.298 "data_size": 63488 00:26:10.298 }, 00:26:10.298 { 00:26:10.298 "name": "BaseBdev3", 00:26:10.298 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:10.298 "is_configured": true, 00:26:10.298 "data_offset": 2048, 00:26:10.298 "data_size": 63488 00:26:10.298 }, 00:26:10.298 { 00:26:10.298 "name": "BaseBdev4", 00:26:10.298 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:10.298 "is_configured": true, 00:26:10.298 "data_offset": 2048, 00:26:10.298 "data_size": 63488 00:26:10.298 } 00:26:10.298 ] 00:26:10.298 }' 00:26:10.298 05:45:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.298 05:45:14 -- common/autotest_common.sh@10 -- # set +x 00:26:10.865 05:45:14 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:10.865 05:45:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:10.865 05:45:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:10.865 05:45:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:10.866 05:45:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:10.866 05:45:14 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.866 05:45:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.124 05:45:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.125 "name": "raid_bdev1", 00:26:11.125 "uuid": "3ed51fa4-f8fc-4c40-9471-ca681ce53056", 00:26:11.125 "strip_size_kb": 64, 00:26:11.125 "state": "online", 00:26:11.125 "raid_level": "raid5f", 00:26:11.125 "superblock": true, 00:26:11.125 "num_base_bdevs": 4, 00:26:11.125 "num_base_bdevs_discovered": 4, 00:26:11.125 "num_base_bdevs_operational": 4, 00:26:11.125 "base_bdevs_list": [ 00:26:11.125 { 00:26:11.125 "name": "spare", 00:26:11.125 "uuid": "0fba8366-fcd9-5d21-abd1-7415be62938a", 00:26:11.125 "is_configured": true, 00:26:11.125 "data_offset": 2048, 00:26:11.125 "data_size": 63488 00:26:11.125 }, 00:26:11.125 { 00:26:11.125 "name": "BaseBdev2", 00:26:11.125 "uuid": "dce1cfa5-20b6-5cfd-a55f-6641dc14c438", 00:26:11.125 "is_configured": true, 00:26:11.125 "data_offset": 2048, 00:26:11.125 "data_size": 63488 00:26:11.125 }, 00:26:11.125 { 00:26:11.125 "name": "BaseBdev3", 00:26:11.125 "uuid": "2f1d37ef-060a-56dd-8b5c-f0627c07da1a", 00:26:11.125 "is_configured": true, 00:26:11.125 "data_offset": 2048, 00:26:11.125 "data_size": 63488 00:26:11.125 }, 00:26:11.125 { 00:26:11.125 "name": "BaseBdev4", 00:26:11.125 "uuid": "7f6ac9c7-9adc-5840-a1cf-7ad8911c5ff0", 00:26:11.125 "is_configured": true, 00:26:11.125 "data_offset": 2048, 00:26:11.125 "data_size": 63488 00:26:11.125 } 00:26:11.125 ] 00:26:11.125 }' 00:26:11.125 05:45:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.384 05:45:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:11.384 05:45:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.384 05:45:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:11.384 05:45:15 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.384 05:45:15 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:11.643 05:45:15 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.643 05:45:15 -- bdev/bdev_raid.sh@709 -- # killprocess 175329 00:26:11.643 05:45:15 -- common/autotest_common.sh@926 -- # '[' -z 175329 ']' 00:26:11.643 05:45:15 -- common/autotest_common.sh@930 -- # kill -0 175329 00:26:11.643 05:45:15 -- common/autotest_common.sh@931 -- # uname 00:26:11.643 05:45:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.643 05:45:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 175329 00:26:11.643 killing process with pid 175329 00:26:11.643 Received shutdown signal, test time was about 60.000000 seconds 00:26:11.643 00:26:11.643 Latency(us) 00:26:11.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.643 =================================================================================================================== 00:26:11.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:11.643 05:45:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:11.643 05:45:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:11.643 05:45:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 175329' 00:26:11.643 05:45:15 -- common/autotest_common.sh@945 -- # kill 175329 00:26:11.643 [2024-10-07 05:45:15.447847] bdev_raid.c:1234:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:26:11.643 05:45:15 -- common/autotest_common.sh@950 -- # wait 175329 00:26:11.643 [2024-10-07 05:45:15.447923] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.643 [2024-10-07 05:45:15.447999] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.643 [2024-10-07 05:45:15.448010] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:26:11.902 [2024-10-07 05:45:15.786136] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.839 ************************************ 00:26:12.839 END TEST raid5f_rebuild_test_sb 00:26:12.839 ************************************ 00:26:12.839 05:45:16 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:12.839 00:26:12.839 real 0m28.876s 00:26:12.839 user 0m43.599s 00:26:12.839 sys 0m3.299s 00:26:12.839 05:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.839 05:45:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.098 05:45:16 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:13.098 00:26:13.098 real 12m5.972s 00:26:13.098 user 19m56.260s 00:26:13.098 sys 1m35.501s 00:26:13.098 05:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.098 ************************************ 00:26:13.098 END TEST bdev_raid 00:26:13.098 05:45:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.098 ************************************ 00:26:13.098 05:45:16 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:13.098 05:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:13.098 05:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:13.098 05:45:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.098 ************************************ 00:26:13.098 START TEST bdevperf_config 00:26:13.098 ************************************ 00:26:13.098 05:45:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:13.098 * Looking for test storage... 
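The create_job calls that follow assemble a plain INI-style job file (test.conf) that is later handed to bdevperf with -j. A simplified, hypothetical sketch of such a helper is shown below; the real common.sh version also pipes in shared global options via cat, which is omitted here:

    # Hypothetical create_job sketch: append one INI section per invocation.
    testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
    create_job() {
        local job_section=$1 rw=$2 filename=$3
        echo "[$job_section]" >> "$testconf"                                       # e.g. [global] or [job0]
        if [[ -n $rw ]]; then echo "rw=$rw" >> "$testconf"; fi                     # e.g. rw=read
        if [[ -n $filename ]]; then echo "filename=$filename" >> "$testconf"; fi   # e.g. filename=Malloc0
    }
    create_job global read Malloc0   # global defaults shared by all jobs
    create_job job0                  # per-job sections may leave rw/filename empty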
00:26:13.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:13.098 05:45:16 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:13.098 05:45:16 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:13.098 05:45:16 -- bdevperf/common.sh@9 -- # local rw=read 00:26:13.098 05:45:16 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:13.098 05:45:16 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:13.098 05:45:16 -- bdevperf/common.sh@13 -- # cat 00:26:13.098 05:45:16 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:13.098 00:26:13.098 05:45:16 -- bdevperf/common.sh@19 -- # echo 00:26:13.098 05:45:16 -- bdevperf/common.sh@20 -- # cat 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:13.098 05:45:16 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:13.098 05:45:16 -- bdevperf/common.sh@9 -- # local rw= 00:26:13.098 05:45:16 -- bdevperf/common.sh@10 -- # local filename= 00:26:13.098 05:45:16 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:13.098 05:45:16 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:13.098 00:26:13.098 05:45:16 -- bdevperf/common.sh@19 -- # echo 00:26:13.098 05:45:16 -- bdevperf/common.sh@20 -- # cat 00:26:13.098 05:45:16 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:13.099 05:45:16 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:13.099 05:45:16 -- bdevperf/common.sh@9 -- # local rw= 00:26:13.099 05:45:16 -- bdevperf/common.sh@10 -- # local filename= 00:26:13.099 05:45:16 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:13.099 05:45:16 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:13.099 00:26:13.099 05:45:16 -- bdevperf/common.sh@19 -- # echo 00:26:13.099 05:45:16 -- bdevperf/common.sh@20 -- # cat 00:26:13.099 05:45:17 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:13.099 05:45:17 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:13.099 05:45:17 -- bdevperf/common.sh@9 -- # local rw= 00:26:13.099 05:45:17 -- bdevperf/common.sh@10 -- # local filename= 00:26:13.099 05:45:17 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:13.099 05:45:17 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:13.099 00:26:13.099 05:45:17 -- bdevperf/common.sh@19 -- # echo 00:26:13.099 05:45:17 -- bdevperf/common.sh@20 -- # cat 00:26:13.099 05:45:17 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:13.099 05:45:17 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:13.099 05:45:17 -- bdevperf/common.sh@9 -- # local rw= 00:26:13.099 05:45:17 -- bdevperf/common.sh@10 -- # local filename= 00:26:13.099 05:45:17 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:13.099 05:45:17 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:13.099 00:26:13.099 05:45:17 -- bdevperf/common.sh@19 -- # echo 00:26:13.099 05:45:17 -- bdevperf/common.sh@20 -- # cat 00:26:13.099 05:45:17 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:17.292 05:45:21 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-10-07 05:45:17.080561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:17.292 [2024-10-07 05:45:17.080760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176087 ] 00:26:17.292 Using job config with 4 jobs 00:26:17.292 [2024-10-07 05:45:17.250760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.292 [2024-10-07 05:45:17.456110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.292 cpumask for '\''job0'\'' is too big 00:26:17.292 cpumask for '\''job1'\'' is too big 00:26:17.292 cpumask for '\''job2'\'' is too big 00:26:17.292 cpumask for '\''job3'\'' is too big 00:26:17.292 Running I/O for 2 seconds... 00:26:17.292 00:26:17.292 Latency(us) 00:26:17.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.01 33036.59 32.26 0.00 0.00 7740.25 1489.45 11975.21 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 33014.79 32.24 0.00 0.00 7731.87 1385.19 10545.34 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32993.30 32.22 0.00 0.00 7724.79 1414.98 9770.82 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32971.88 32.20 0.00 0.00 7716.77 1414.98 10247.45 00:26:17.293 =================================================================================================================== 00:26:17.293 Total : 132016.56 128.92 0.00 0.00 7728.42 1385.19 11975.21' 00:26:17.293 05:45:21 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-10-07 05:45:17.080561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:17.293 [2024-10-07 05:45:17.080760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176087 ] 00:26:17.293 Using job config with 4 jobs 00:26:17.293 [2024-10-07 05:45:17.250760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.293 [2024-10-07 05:45:17.456110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.293 cpumask for '\''job0'\'' is too big 00:26:17.293 cpumask for '\''job1'\'' is too big 00:26:17.293 cpumask for '\''job2'\'' is too big 00:26:17.293 cpumask for '\''job3'\'' is too big 00:26:17.293 Running I/O for 2 seconds... 
00:26:17.293 00:26:17.293 Latency(us) 00:26:17.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.01 33036.59 32.26 0.00 0.00 7740.25 1489.45 11975.21 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 33014.79 32.24 0.00 0.00 7731.87 1385.19 10545.34 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32993.30 32.22 0.00 0.00 7724.79 1414.98 9770.82 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32971.88 32.20 0.00 0.00 7716.77 1414.98 10247.45 00:26:17.293 =================================================================================================================== 00:26:17.293 Total : 132016.56 128.92 0.00 0.00 7728.42 1385.19 11975.21' 00:26:17.293 05:45:21 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:17.293 05:45:21 -- bdevperf/common.sh@32 -- # echo '[2024-10-07 05:45:17.080561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:17.293 [2024-10-07 05:45:17.080760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176087 ] 00:26:17.293 Using job config with 4 jobs 00:26:17.293 [2024-10-07 05:45:17.250760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.293 [2024-10-07 05:45:17.456110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.293 cpumask for '\''job0'\'' is too big 00:26:17.293 cpumask for '\''job1'\'' is too big 00:26:17.293 cpumask for '\''job2'\'' is too big 00:26:17.293 cpumask for '\''job3'\'' is too big 00:26:17.293 Running I/O for 2 seconds... 00:26:17.293 00:26:17.293 Latency(us) 00:26:17.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.01 33036.59 32.26 0.00 0.00 7740.25 1489.45 11975.21 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 33014.79 32.24 0.00 0.00 7731.87 1385.19 10545.34 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32993.30 32.22 0.00 0.00 7724.79 1414.98 9770.82 00:26:17.293 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.293 Malloc0 : 2.02 32971.88 32.20 0.00 0.00 7716.77 1414.98 10247.45 00:26:17.293 =================================================================================================================== 00:26:17.293 Total : 132016.56 128.92 0.00 0.00 7728.42 1385.19 11975.21' 00:26:17.293 05:45:21 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:17.293 05:45:21 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:17.293 05:45:21 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:17.293 [2024-10-07 05:45:21.234898] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:17.293 [2024-10-07 05:45:21.235121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176148 ] 00:26:17.552 [2024-10-07 05:45:21.410783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.811 [2024-10-07 05:45:21.650403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.380 cpumask for 'job0' is too big 00:26:18.380 cpumask for 'job1' is too big 00:26:18.380 cpumask for 'job2' is too big 00:26:18.380 cpumask for 'job3' is too big 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:21.671 Running I/O for 2 seconds... 00:26:21.671 00:26:21.671 Latency(us) 00:26:21.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.671 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:21.671 Malloc0 : 2.01 32485.66 31.72 0.00 0.00 7871.52 1482.01 12094.37 00:26:21.671 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:21.671 Malloc0 : 2.02 32495.94 31.73 0.00 0.00 7856.32 1362.85 10724.07 00:26:21.671 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:21.671 Malloc0 : 2.02 32474.56 31.71 0.00 0.00 7847.52 1414.98 9294.20 00:26:21.671 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:21.671 Malloc0 : 2.02 32452.63 31.69 0.00 0.00 7839.25 1422.43 9353.77 00:26:21.671 =================================================================================================================== 00:26:21.671 Total : 129908.79 126.86 0.00 0.00 7853.64 1362.85 12094.37' 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:21.671 05:45:25 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:21.671 05:45:25 -- bdevperf/common.sh@9 -- # local rw=write 00:26:21.671 05:45:25 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:21.671 05:45:25 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:21.671 00:26:21.671 05:45:25 -- bdevperf/common.sh@19 -- # echo 00:26:21.671 05:45:25 -- bdevperf/common.sh@20 -- # cat 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:21.671 05:45:25 -- bdevperf/common.sh@9 -- # local rw=write 00:26:21.671 05:45:25 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:21.671 05:45:25 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:21.671 00:26:21.671 05:45:25 -- bdevperf/common.sh@19 -- # echo 00:26:21.671 05:45:25 -- bdevperf/common.sh@20 -- # cat 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:21.671 05:45:25 -- bdevperf/common.sh@9 -- # local rw=write 00:26:21.671 05:45:25 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:21.671 05:45:25 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:21.671 05:45:25 -- bdevperf/common.sh@18 -- # 
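Each test case then runs bdevperf against the generated job file and checks how many jobs it reports. Stripped of the surrounding harness, the pattern visible in the trace is roughly the following (a sketch under those assumptions, not the literal test_config.sh code):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
    testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
    # Run for 2 seconds, capture the output, then pull the advertised job count out of it.
    output=$("$bdevperf" -t 2 --json "$jsonconf" -j "$testconf" 2>&1)
    num_jobs=$(grep -oE 'Using job config with [0-9]+ jobs' <<< "$output" | grep -oE '[0-9]+')
    [[ $num_jobs == 4 ]]   # the first case defines job0..job3, so 4 jobs are expected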
job='[job2]' 00:26:21.671 00:26:21.671 05:45:25 -- bdevperf/common.sh@19 -- # echo 00:26:21.671 05:45:25 -- bdevperf/common.sh@20 -- # cat 00:26:21.671 05:45:25 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-10-07 05:45:25.465230] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:25.906 [2024-10-07 05:45:25.465439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176204 ] 00:26:25.906 Using job config with 3 jobs 00:26:25.906 [2024-10-07 05:45:25.637307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.906 [2024-10-07 05:45:25.846210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.906 cpumask for '\''job0'\'' is too big 00:26:25.906 cpumask for '\''job1'\'' is too big 00:26:25.906 cpumask for '\''job2'\'' is too big 00:26:25.906 Running I/O for 2 seconds... 00:26:25.906 00:26:25.906 Latency(us) 00:26:25.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44052.08 43.02 0.00 0.00 5806.44 1444.77 9234.62 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44021.91 42.99 0.00 0.00 5799.50 1355.40 7685.59 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 43990.06 42.96 0.00 0.00 5793.32 1452.22 6672.76 00:26:25.906 =================================================================================================================== 00:26:25.906 Total : 132064.04 128.97 0.00 0.00 5799.76 1355.40 9234.62' 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-10-07 05:45:25.465230] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:25.906 [2024-10-07 05:45:25.465439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176204 ] 00:26:25.906 Using job config with 3 jobs 00:26:25.906 [2024-10-07 05:45:25.637307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.906 [2024-10-07 05:45:25.846210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.906 cpumask for '\''job0'\'' is too big 00:26:25.906 cpumask for '\''job1'\'' is too big 00:26:25.906 cpumask for '\''job2'\'' is too big 00:26:25.906 Running I/O for 2 seconds... 
00:26:25.906 00:26:25.906 Latency(us) 00:26:25.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44052.08 43.02 0.00 0.00 5806.44 1444.77 9234.62 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44021.91 42.99 0.00 0.00 5799.50 1355.40 7685.59 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 43990.06 42.96 0.00 0.00 5793.32 1452.22 6672.76 00:26:25.906 =================================================================================================================== 00:26:25.906 Total : 132064.04 128.97 0.00 0.00 5799.76 1355.40 9234.62' 00:26:25.906 05:45:29 -- bdevperf/common.sh@32 -- # echo '[2024-10-07 05:45:25.465230] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:25.906 [2024-10-07 05:45:25.465439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176204 ] 00:26:25.906 Using job config with 3 jobs 00:26:25.906 [2024-10-07 05:45:25.637307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.906 [2024-10-07 05:45:25.846210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.906 cpumask for '\''job0'\'' is too big 00:26:25.906 cpumask for '\''job1'\'' is too big 00:26:25.906 cpumask for '\''job2'\'' is too big 00:26:25.906 Running I/O for 2 seconds... 00:26:25.906 00:26:25.906 Latency(us) 00:26:25.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44052.08 43.02 0.00 0.00 5806.44 1444.77 9234.62 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 44021.91 42.99 0.00 0.00 5799.50 1355.40 7685.59 00:26:25.906 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:25.906 Malloc0 : 2.01 43990.06 42.96 0.00 0.00 5793.32 1452.22 6672.76 00:26:25.906 =================================================================================================================== 00:26:25.906 Total : 132064.04 128.97 0.00 0.00 5799.76 1355.40 9234.62' 00:26:25.906 05:45:29 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:25.906 05:45:29 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:25.906 05:45:29 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:25.906 05:45:29 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:25.906 05:45:29 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:25.906 05:45:29 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:25.906 05:45:29 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:25.906 05:45:29 -- bdevperf/common.sh@13 -- # cat 00:26:25.906 05:45:29 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:25.906 00:26:25.906 05:45:29 -- bdevperf/common.sh@19 -- # echo 00:26:25.906 
05:45:29 -- bdevperf/common.sh@20 -- # cat 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:25.906 05:45:29 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:25.906 05:45:29 -- bdevperf/common.sh@9 -- # local rw= 00:26:25.906 05:45:29 -- bdevperf/common.sh@10 -- # local filename= 00:26:25.906 05:45:29 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:25.906 05:45:29 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:25.906 00:26:25.906 05:45:29 -- bdevperf/common.sh@19 -- # echo 00:26:25.906 05:45:29 -- bdevperf/common.sh@20 -- # cat 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:25.906 05:45:29 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:25.906 05:45:29 -- bdevperf/common.sh@9 -- # local rw= 00:26:25.906 05:45:29 -- bdevperf/common.sh@10 -- # local filename= 00:26:25.906 05:45:29 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:25.906 05:45:29 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:25.906 00:26:25.906 05:45:29 -- bdevperf/common.sh@19 -- # echo 00:26:25.906 05:45:29 -- bdevperf/common.sh@20 -- # cat 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:25.906 05:45:29 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:25.906 05:45:29 -- bdevperf/common.sh@9 -- # local rw= 00:26:25.906 05:45:29 -- bdevperf/common.sh@10 -- # local filename= 00:26:25.906 05:45:29 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:25.906 05:45:29 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:25.906 00:26:25.906 05:45:29 -- bdevperf/common.sh@19 -- # echo 00:26:25.906 05:45:29 -- bdevperf/common.sh@20 -- # cat 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:25.906 05:45:29 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:25.906 05:45:29 -- bdevperf/common.sh@9 -- # local rw= 00:26:25.906 05:45:29 -- bdevperf/common.sh@10 -- # local filename= 00:26:25.906 05:45:29 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:25.906 05:45:29 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:25.906 00:26:25.906 05:45:29 -- bdevperf/common.sh@19 -- # echo 00:26:25.906 05:45:29 -- bdevperf/common.sh@20 -- # cat 00:26:25.906 05:45:29 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:30.103 05:45:33 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-10-07 05:45:29.636812] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:30.103 [2024-10-07 05:45:29.636994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176263 ] 00:26:30.103 Using job config with 4 jobs 00:26:30.103 [2024-10-07 05:45:29.790714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.103 [2024-10-07 05:45:29.996099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.103 cpumask for '\''job0'\'' is too big 00:26:30.103 cpumask for '\''job1'\'' is too big 00:26:30.103 cpumask for '\''job2'\'' is too big 00:26:30.104 cpumask for '\''job3'\'' is too big 00:26:30.104 Running I/O for 2 seconds... 
00:26:30.104 00:26:30.104 Latency(us) 00:26:30.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16255.59 15.87 0.00 0.00 15741.71 3276.80 26333.56 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.03 16244.52 15.86 0.00 0.00 15738.58 4230.05 25856.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16234.08 15.85 0.00 0.00 15697.39 3291.69 22163.08 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16223.38 15.84 0.00 0.00 15695.30 3678.95 22043.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16212.93 15.83 0.00 0.00 15665.63 3023.59 18945.86 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16202.20 15.82 0.00 0.00 15663.88 3470.43 18945.86 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16191.93 15.81 0.00 0.00 15628.88 3023.59 17992.61 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16181.19 15.80 0.00 0.00 15627.81 3425.75 18111.77 00:26:30.104 =================================================================================================================== 00:26:30.104 Total : 129745.81 126.70 0.00 0.00 15682.40 3023.59 26333.56' 00:26:30.104 05:45:33 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-10-07 05:45:29.636812] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:30.104 [2024-10-07 05:45:29.636994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176263 ] 00:26:30.104 Using job config with 4 jobs 00:26:30.104 [2024-10-07 05:45:29.790714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.104 [2024-10-07 05:45:29.996099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.104 cpumask for '\''job0'\'' is too big 00:26:30.104 cpumask for '\''job1'\'' is too big 00:26:30.104 cpumask for '\''job2'\'' is too big 00:26:30.104 cpumask for '\''job3'\'' is too big 00:26:30.104 Running I/O for 2 seconds... 
00:26:30.104 00:26:30.104 Latency(us) 00:26:30.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16255.59 15.87 0.00 0.00 15741.71 3276.80 26333.56 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.03 16244.52 15.86 0.00 0.00 15738.58 4230.05 25856.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16234.08 15.85 0.00 0.00 15697.39 3291.69 22163.08 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16223.38 15.84 0.00 0.00 15695.30 3678.95 22043.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16212.93 15.83 0.00 0.00 15665.63 3023.59 18945.86 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16202.20 15.82 0.00 0.00 15663.88 3470.43 18945.86 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16191.93 15.81 0.00 0.00 15628.88 3023.59 17992.61 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16181.19 15.80 0.00 0.00 15627.81 3425.75 18111.77 00:26:30.104 =================================================================================================================== 00:26:30.104 Total : 129745.81 126.70 0.00 0.00 15682.40 3023.59 26333.56' 00:26:30.104 05:45:33 -- bdevperf/common.sh@32 -- # echo '[2024-10-07 05:45:29.636812] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:30.104 [2024-10-07 05:45:29.636994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176263 ] 00:26:30.104 Using job config with 4 jobs 00:26:30.104 [2024-10-07 05:45:29.790714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.104 [2024-10-07 05:45:29.996099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.104 cpumask for '\''job0'\'' is too big 00:26:30.104 cpumask for '\''job1'\'' is too big 00:26:30.104 cpumask for '\''job2'\'' is too big 00:26:30.104 cpumask for '\''job3'\'' is too big 00:26:30.104 Running I/O for 2 seconds... 
00:26:30.104 00:26:30.104 Latency(us) 00:26:30.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16255.59 15.87 0.00 0.00 15741.71 3276.80 26333.56 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.03 16244.52 15.86 0.00 0.00 15738.58 4230.05 25856.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.03 16234.08 15.85 0.00 0.00 15697.39 3291.69 22163.08 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16223.38 15.84 0.00 0.00 15695.30 3678.95 22043.93 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16212.93 15.83 0.00 0.00 15665.63 3023.59 18945.86 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16202.20 15.82 0.00 0.00 15663.88 3470.43 18945.86 00:26:30.104 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc0 : 2.04 16191.93 15.81 0.00 0.00 15628.88 3023.59 17992.61 00:26:30.104 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:30.104 Malloc1 : 2.04 16181.19 15.80 0.00 0.00 15627.81 3425.75 18111.77 00:26:30.104 =================================================================================================================== 00:26:30.104 Total : 129745.81 126.70 0.00 0.00 15682.40 3023.59 26333.56' 00:26:30.104 05:45:33 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:30.104 05:45:33 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:30.104 05:45:33 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:30.104 05:45:33 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:30.104 05:45:33 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:30.104 05:45:33 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:30.104 00:26:30.104 real 0m16.836s 00:26:30.104 user 0m14.823s 00:26:30.104 sys 0m1.459s 00:26:30.104 05:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.104 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.104 ************************************ 00:26:30.104 END TEST bdevperf_config 00:26:30.104 ************************************ 00:26:30.104 05:45:33 -- spdk/autotest.sh@198 -- # uname -s 00:26:30.104 05:45:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:30.104 05:45:33 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:30.104 05:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:30.104 05:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:30.104 05:45:33 -- common/autotest_common.sh@10 -- # set +x 00:26:30.104 ************************************ 00:26:30.104 START TEST reactor_set_interrupt 00:26:30.104 ************************************ 00:26:30.104 05:45:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:30.104 * Looking for test storage... 
00:26:30.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.104 05:45:33 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:30.104 05:45:33 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:30.104 05:45:33 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:30.104 05:45:33 -- common/autotest_common.sh@34 -- # set -e 00:26:30.104 05:45:33 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:30.104 05:45:33 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:30.105 05:45:33 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:30.105 05:45:33 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:30.105 05:45:33 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:30.105 05:45:33 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:30.105 05:45:33 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:30.105 05:45:33 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:30.105 05:45:33 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:30.105 05:45:33 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:30.105 05:45:33 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:30.105 05:45:33 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:30.105 05:45:33 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:30.105 05:45:33 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:30.105 05:45:33 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:30.105 05:45:33 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:30.105 05:45:33 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:30.105 05:45:33 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:30.105 05:45:33 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:30.105 05:45:33 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:30.105 05:45:33 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:30.105 05:45:33 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:30.105 05:45:33 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:30.105 05:45:33 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:30.105 05:45:33 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:30.105 05:45:33 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:30.105 05:45:33 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:30.105 05:45:33 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:30.105 05:45:33 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:26:30.105 05:45:33 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:30.105 05:45:33 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:30.105 05:45:33 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:30.105 05:45:33 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:30.105 05:45:33 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:30.105 05:45:33 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:30.105 05:45:33 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:30.105 05:45:33 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:30.105 05:45:33 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:30.105 05:45:33 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:30.105 05:45:33 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:30.105 05:45:33 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:30.105 05:45:33 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:30.105 05:45:33 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:30.105 05:45:33 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:30.105 05:45:33 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:30.105 05:45:33 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:30.105 05:45:33 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:30.105 05:45:33 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:30.105 05:45:33 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:30.105 05:45:33 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:30.105 05:45:33 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:30.105 05:45:33 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:30.105 05:45:33 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:30.105 05:45:33 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:30.105 05:45:33 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:30.105 05:45:33 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:30.105 05:45:33 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:30.105 05:45:33 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:30.105 05:45:33 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:26:30.105 05:45:33 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:30.105 05:45:33 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:30.105 05:45:33 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:30.105 05:45:33 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:30.105 05:45:33 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:30.105 05:45:33 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:30.105 05:45:33 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:30.105 05:45:33 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:30.105 05:45:33 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:30.105 05:45:33 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:30.105 05:45:33 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:30.105 05:45:33 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:30.105 05:45:33 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:30.105 05:45:33 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:30.105 05:45:33 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:30.105 05:45:33 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:30.105 05:45:33 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:30.105 05:45:33 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:30.105 05:45:33 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:30.105 05:45:33 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:30.105 05:45:33 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:30.105 05:45:33 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:30.105 05:45:33 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:30.105 05:45:33 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:30.105 05:45:33 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:30.105 05:45:33 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:30.105 05:45:33 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:30.105 05:45:33 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:30.105 05:45:33 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:30.105 05:45:33 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:30.105 05:45:33 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:30.105 05:45:33 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:30.105 05:45:33 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:30.105 05:45:33 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:30.105 #define SPDK_CONFIG_H 00:26:30.105 #define SPDK_CONFIG_APPS 1 00:26:30.105 #define SPDK_CONFIG_ARCH native 00:26:30.105 #define SPDK_CONFIG_ASAN 1 00:26:30.105 #undef SPDK_CONFIG_AVAHI 00:26:30.105 #undef SPDK_CONFIG_CET 00:26:30.105 #define SPDK_CONFIG_COVERAGE 1 00:26:30.105 #define SPDK_CONFIG_CROSS_PREFIX 00:26:30.105 #undef SPDK_CONFIG_CRYPTO 00:26:30.105 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:30.105 #undef SPDK_CONFIG_CUSTOMOCF 00:26:30.105 #undef SPDK_CONFIG_DAOS 00:26:30.105 #define SPDK_CONFIG_DAOS_DIR 00:26:30.105 #define SPDK_CONFIG_DEBUG 1 00:26:30.105 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:30.105 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:30.105 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:30.105 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:30.105 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:30.105 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:30.105 #define SPDK_CONFIG_EXAMPLES 1 00:26:30.105 #undef SPDK_CONFIG_FC 00:26:30.105 #define SPDK_CONFIG_FC_PATH 00:26:30.105 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:30.105 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:30.105 #undef SPDK_CONFIG_FUSE 00:26:30.105 #undef SPDK_CONFIG_FUZZER 00:26:30.105 #define SPDK_CONFIG_FUZZER_LIB 00:26:30.105 #undef SPDK_CONFIG_GOLANG 00:26:30.105 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:30.105 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:30.105 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:30.105 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:30.105 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:30.105 #define 
SPDK_CONFIG_IDXD 1 00:26:30.105 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:30.105 #undef SPDK_CONFIG_IPSEC_MB 00:26:30.105 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:30.105 #define SPDK_CONFIG_ISAL 1 00:26:30.105 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:30.105 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:30.105 #define SPDK_CONFIG_LIBDIR 00:26:30.105 #undef SPDK_CONFIG_LTO 00:26:30.105 #define SPDK_CONFIG_MAX_LCORES 00:26:30.105 #define SPDK_CONFIG_NVME_CUSE 1 00:26:30.105 #undef SPDK_CONFIG_OCF 00:26:30.105 #define SPDK_CONFIG_OCF_PATH 00:26:30.105 #define SPDK_CONFIG_OPENSSL_PATH 00:26:30.105 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:30.105 #undef SPDK_CONFIG_PGO_USE 00:26:30.105 #define SPDK_CONFIG_PREFIX /usr/local 00:26:30.105 #define SPDK_CONFIG_RAID5F 1 00:26:30.105 #undef SPDK_CONFIG_RBD 00:26:30.105 #define SPDK_CONFIG_RDMA 1 00:26:30.105 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:30.105 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:30.105 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:30.105 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:30.105 #undef SPDK_CONFIG_SHARED 00:26:30.105 #undef SPDK_CONFIG_SMA 00:26:30.105 #define SPDK_CONFIG_TESTS 1 00:26:30.105 #undef SPDK_CONFIG_TSAN 00:26:30.105 #undef SPDK_CONFIG_UBLK 00:26:30.105 #define SPDK_CONFIG_UBSAN 1 00:26:30.105 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:30.105 #undef SPDK_CONFIG_URING 00:26:30.105 #define SPDK_CONFIG_URING_PATH 00:26:30.106 #undef SPDK_CONFIG_URING_ZNS 00:26:30.106 #undef SPDK_CONFIG_USDT 00:26:30.106 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:30.106 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:30.106 #undef SPDK_CONFIG_VFIO_USER 00:26:30.106 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:30.106 #define SPDK_CONFIG_VHOST 1 00:26:30.106 #define SPDK_CONFIG_VIRTIO 1 00:26:30.106 #undef SPDK_CONFIG_VTUNE 00:26:30.106 #define SPDK_CONFIG_VTUNE_DIR 00:26:30.106 #define SPDK_CONFIG_WERROR 1 00:26:30.106 #define SPDK_CONFIG_WPDK_DIR 00:26:30.106 #undef SPDK_CONFIG_XNVME 00:26:30.106 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:30.106 05:45:33 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:30.106 05:45:33 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:30.106 05:45:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.106 05:45:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.106 05:45:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.106 05:45:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:30.106 05:45:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:30.106 05:45:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:30.106 05:45:33 -- paths/export.sh@5 -- # export PATH 00:26:30.106 05:45:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:30.106 05:45:33 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:30.106 05:45:33 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:30.106 05:45:33 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:30.106 05:45:33 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:30.106 05:45:33 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:30.106 05:45:33 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:30.106 05:45:33 -- pm/common@16 -- # TEST_TAG=N/A 00:26:30.106 05:45:33 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:30.106 05:45:33 -- common/autotest_common.sh@52 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:30.106 05:45:33 -- common/autotest_common.sh@56 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:30.106 05:45:33 -- common/autotest_common.sh@58 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:30.106 05:45:33 -- common/autotest_common.sh@60 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:30.106 05:45:33 -- common/autotest_common.sh@62 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:30.106 05:45:33 -- common/autotest_common.sh@64 -- # : 00:26:30.106 05:45:33 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:30.106 05:45:33 -- common/autotest_common.sh@66 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:30.106 05:45:33 -- common/autotest_common.sh@68 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:30.106 05:45:33 -- common/autotest_common.sh@70 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:30.106 05:45:33 -- common/autotest_common.sh@72 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:30.106 05:45:33 -- common/autotest_common.sh@74 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:30.106 05:45:33 -- common/autotest_common.sh@76 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:30.106 05:45:33 -- common/autotest_common.sh@78 -- # : 0 00:26:30.106 05:45:33 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:30.106 05:45:33 -- common/autotest_common.sh@80 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:30.106 05:45:33 -- common/autotest_common.sh@82 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:30.106 05:45:33 -- common/autotest_common.sh@84 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:30.106 05:45:33 -- common/autotest_common.sh@86 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:30.106 05:45:33 -- common/autotest_common.sh@88 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:30.106 05:45:33 -- common/autotest_common.sh@90 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:30.106 05:45:33 -- common/autotest_common.sh@92 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:30.106 05:45:33 -- common/autotest_common.sh@94 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:30.106 05:45:33 -- common/autotest_common.sh@96 -- # : rdma 00:26:30.106 05:45:33 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:30.106 05:45:33 -- common/autotest_common.sh@98 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:30.106 05:45:33 -- common/autotest_common.sh@100 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:30.106 05:45:33 -- common/autotest_common.sh@102 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:30.106 05:45:33 -- common/autotest_common.sh@104 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:30.106 05:45:33 -- common/autotest_common.sh@106 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:30.106 05:45:33 -- common/autotest_common.sh@108 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:30.106 05:45:33 -- common/autotest_common.sh@110 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:30.106 05:45:33 -- common/autotest_common.sh@112 -- # : 0 00:26:30.106 05:45:33 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:30.106 05:45:33 -- common/autotest_common.sh@114 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:30.106 05:45:33 -- common/autotest_common.sh@116 -- # : 1 00:26:30.106 05:45:33 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:30.106 05:45:33 -- common/autotest_common.sh@118 -- # : 00:26:30.106 05:45:33 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:30.106 05:45:34 -- common/autotest_common.sh@120 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:30.106 05:45:34 -- common/autotest_common.sh@122 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:30.106 05:45:34 -- common/autotest_common.sh@124 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:30.106 05:45:34 -- common/autotest_common.sh@126 -- # : 0 00:26:30.106 
05:45:34 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:30.106 05:45:34 -- common/autotest_common.sh@128 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:30.106 05:45:34 -- common/autotest_common.sh@130 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:30.106 05:45:34 -- common/autotest_common.sh@132 -- # : 00:26:30.106 05:45:34 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:30.106 05:45:34 -- common/autotest_common.sh@134 -- # : true 00:26:30.106 05:45:34 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:30.106 05:45:34 -- common/autotest_common.sh@136 -- # : 1 00:26:30.106 05:45:34 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:30.106 05:45:34 -- common/autotest_common.sh@138 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:30.106 05:45:34 -- common/autotest_common.sh@140 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:30.106 05:45:34 -- common/autotest_common.sh@142 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:30.106 05:45:34 -- common/autotest_common.sh@144 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:30.106 05:45:34 -- common/autotest_common.sh@146 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:30.106 05:45:34 -- common/autotest_common.sh@148 -- # : 00:26:30.106 05:45:34 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:30.106 05:45:34 -- common/autotest_common.sh@150 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:30.106 05:45:34 -- common/autotest_common.sh@152 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:30.106 05:45:34 -- common/autotest_common.sh@154 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:30.106 05:45:34 -- common/autotest_common.sh@156 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:30.106 05:45:34 -- common/autotest_common.sh@158 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:30.106 05:45:34 -- common/autotest_common.sh@160 -- # : 0 00:26:30.106 05:45:34 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:30.107 05:45:34 -- common/autotest_common.sh@163 -- # : 00:26:30.107 05:45:34 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:30.107 05:45:34 -- common/autotest_common.sh@165 -- # : 0 00:26:30.107 05:45:34 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:30.107 05:45:34 -- common/autotest_common.sh@167 -- # : 0 00:26:30.107 05:45:34 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:30.107 05:45:34 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:30.107 05:45:34 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:30.107 05:45:34 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:30.107 05:45:34 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:30.107 05:45:34 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:30.107 05:45:34 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:30.107 05:45:34 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:30.107 05:45:34 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:30.107 05:45:34 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:30.107 05:45:34 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:30.107 05:45:34 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:30.107 05:45:34 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:30.107 05:45:34 -- common/autotest_common.sh@196 -- # cat 00:26:30.107 05:45:34 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:30.107 05:45:34 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:30.107 05:45:34 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:30.107 05:45:34 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:30.107 
05:45:34 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:30.107 05:45:34 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:30.107 05:45:34 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:30.107 05:45:34 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:30.107 05:45:34 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:30.107 05:45:34 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:30.107 05:45:34 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:30.107 05:45:34 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:30.107 05:45:34 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:30.107 05:45:34 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:30.107 05:45:34 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:30.107 05:45:34 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:30.107 05:45:34 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:30.107 05:45:34 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:30.107 05:45:34 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:30.107 05:45:34 -- common/autotest_common.sh@249 -- # valgrind= 00:26:30.107 05:45:34 -- common/autotest_common.sh@255 -- # uname -s 00:26:30.107 05:45:34 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:30.107 05:45:34 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:30.107 05:45:34 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:30.107 05:45:34 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:30.107 05:45:34 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:30.107 05:45:34 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:30.107 05:45:34 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:30.107 05:45:34 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:30.107 05:45:34 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:30.107 05:45:34 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:30.107 05:45:34 -- common/autotest_common.sh@309 -- # [[ -z 176352 ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@309 -- # kill -0 176352 00:26:30.107 05:45:34 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:30.107 05:45:34 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:30.107 05:45:34 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:30.107 05:45:34 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:30.107 05:45:34 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:26:30.107 05:45:34 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:30.107 05:45:34 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:30.107 05:45:34 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.NCHyfM 00:26:30.107 05:45:34 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:30.107 05:45:34 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:30.107 05:45:34 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.NCHyfM/tests/interrupt /tmp/spdk.NCHyfM 00:26:30.107 05:45:34 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@318 -- # df -T 00:26:30.107 05:45:34 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248935936 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:26:30.107 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=4747264 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=9651793920 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:30.107 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=10948222976 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265806848 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268399616 00:26:30.107 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:30.107 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:26:30.107 05:45:34 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:30.107 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:26:30.107 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:26:30.107 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:26:30.108 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.108 05:45:34 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:26:30.108 05:45:34 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:30.108 05:45:34 -- common/autotest_common.sh@353 -- # avails["$mount"]=98670325760 00:26:30.108 05:45:34 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:30.108 05:45:34 -- common/autotest_common.sh@354 -- # uses["$mount"]=1032454144 00:26:30.108 05:45:34 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:30.108 05:45:34 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:30.108 * Looking for test storage... 00:26:30.108 05:45:34 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:30.108 05:45:34 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:30.367 05:45:34 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.367 05:45:34 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:30.367 05:45:34 -- common/autotest_common.sh@363 -- # mount=/ 00:26:30.367 05:45:34 -- common/autotest_common.sh@365 -- # target_space=9651793920 00:26:30.367 05:45:34 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:30.367 05:45:34 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:30.367 05:45:34 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:30.367 05:45:34 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:30.367 05:45:34 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:30.367 05:45:34 -- common/autotest_common.sh@372 -- # new_size=13162815488 00:26:30.367 05:45:34 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:30.367 05:45:34 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.367 05:45:34 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.367 05:45:34 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:30.367 05:45:34 -- common/autotest_common.sh@380 -- # return 0 00:26:30.367 05:45:34 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:30.367 05:45:34 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:30.367 05:45:34 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:30.367 05:45:34 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:30.367 
05:45:34 -- common/autotest_common.sh@1672 -- # true 00:26:30.367 05:45:34 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:30.367 05:45:34 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:30.367 05:45:34 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:30.367 05:45:34 -- common/autotest_common.sh@27 -- # exec 00:26:30.367 05:45:34 -- common/autotest_common.sh@29 -- # exec 00:26:30.367 05:45:34 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:30.367 05:45:34 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:30.367 05:45:34 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:30.367 05:45:34 -- common/autotest_common.sh@18 -- # set -x 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:30.367 05:45:34 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:30.367 05:45:34 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:30.367 05:45:34 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=176398 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:30.367 05:45:34 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 176398 /var/tmp/spdk.sock 00:26:30.367 05:45:34 -- common/autotest_common.sh@819 -- # '[' -z 176398 ']' 00:26:30.367 05:45:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.367 05:45:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:30.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.367 05:45:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.367 05:45:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:30.367 05:45:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.367 [2024-10-07 05:45:34.158598] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:30.367 [2024-10-07 05:45:34.158812] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176398 ] 00:26:30.367 [2024-10-07 05:45:34.339539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:30.626 [2024-10-07 05:45:34.535568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.626 [2024-10-07 05:45:34.535729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.626 [2024-10-07 05:45:34.535732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.884 [2024-10-07 05:45:34.815655] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:31.143 05:45:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:31.143 05:45:35 -- common/autotest_common.sh@852 -- # return 0 00:26:31.143 05:45:35 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:31.143 05:45:35 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:31.710 Malloc0 00:26:31.710 Malloc1 00:26:31.710 Malloc2 00:26:31.710 05:45:35 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:31.710 05:45:35 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:31.710 05:45:35 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:31.710 05:45:35 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:31.710 5000+0 records in 00:26:31.711 5000+0 records out 00:26:31.711 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0210265 s, 487 MB/s 00:26:31.711 05:45:35 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:31.969 AIO0 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 176398 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 176398 without_thd 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=176398 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:31.969 05:45:35 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:31.969 05:45:35 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:32.228 05:45:36 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:32.228 05:45:36 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:32.228 05:45:36 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:32.487 05:45:36 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:32.487 spdk_thread ids are 1 on reactor0. 00:26:32.487 05:45:36 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:32.487 05:45:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:32.487 05:45:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176398 0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176398 0 idle 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176398 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.75 reactor_0' 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@48 -- # echo 176398 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.75 reactor_0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:32.487 05:45:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:32.488 05:45:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:32.488 05:45:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176398 1 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176398 1 idle 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:32.488 
05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:32.488 05:45:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176402 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.00 reactor_1' 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@48 -- # echo 176402 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.00 reactor_1 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:32.746 05:45:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:32.746 05:45:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176398 2 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176398 2 idle 00:26:32.746 05:45:36 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:32.747 05:45:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176403 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.00 reactor_2' 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@48 -- # echo 176403 root 20 0 20.1t 146020 28872 S 0.0 1.2 0:00.00 reactor_2 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:33.005 05:45:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:33.005 05:45:36 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:33.005 05:45:36 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
00:26:33.005 05:45:36 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:33.005 [2024-10-07 05:45:36.952330] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:33.005 05:45:36 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:33.264 [2024-10-07 05:45:37.220104] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:33.264 [2024-10-07 05:45:37.220689] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:33.264 05:45:37 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:33.522 [2024-10-07 05:45:37.415945] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:33.522 [2024-10-07 05:45:37.416380] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:33.522 05:45:37 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:33.522 05:45:37 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 176398 0 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 176398 0 busy 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:33.523 05:45:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176398 root 20 0 20.1t 146132 28872 R 93.3 1.2 0:01.12 reactor_0' 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@48 -- # echo 176398 root 20 0 20.1t 146132 28872 R 93.3 1.2 0:01.12 reactor_0 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:33.781 05:45:37 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:33.781 05:45:37 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 176398 2 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 176398 2 busy 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:33.781 
05:45:37 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:33.781 05:45:37 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:33.782 05:45:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:33.782 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:33.782 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:33.782 05:45:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:33.782 05:45:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:34.040 05:45:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176403 root 20 0 20.1t 146132 28872 R 99.9 1.2 0:00.34 reactor_2' 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@48 -- # echo 176403 root 20 0 20.1t 146132 28872 R 99.9 1.2 0:00.34 reactor_2 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:34.041 05:45:37 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:34.041 [2024-10-07 05:45:37.959974] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:34.041 [2024-10-07 05:45:37.960356] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:34.041 05:45:37 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:34.041 05:45:37 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 176398 2 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176398 2 idle 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:34.041 05:45:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176403 root 20 0 20.1t 146196 28872 S 0.0 1.2 0:00.54 reactor_2' 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@48 -- # echo 176403 root 20 0 20.1t 146196 28872 S 0.0 1.2 0:00.54 reactor_2 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:34.299 05:45:38 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:34.299 05:45:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:34.299 05:45:38 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:34.558 [2024-10-07 05:45:38.395964] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:34.558 [2024-10-07 05:45:38.396288] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:34.558 05:45:38 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:34.558 05:45:38 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:34.558 05:45:38 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:34.817 [2024-10-07 05:45:38.644254] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:34.817 05:45:38 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 176398 0 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176398 0 idle 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@33 -- # local pid=176398 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176398 -w 256 00:26:34.817 05:45:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176398 root 20 0 20.1t 146288 28872 S 6.7 1.2 0:01.94 reactor_0' 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@48 -- # echo 176398 root 20 0 20.1t 146288 28872 S 6.7 1.2 0:01.94 reactor_0 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:35.076 05:45:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:35.076 05:45:38 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:35.076 05:45:38 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:35.076 05:45:38 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:35.076 05:45:38 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 176398 
00:26:35.076 05:45:38 -- common/autotest_common.sh@926 -- # '[' -z 176398 ']' 00:26:35.076 05:45:38 -- common/autotest_common.sh@930 -- # kill -0 176398 00:26:35.076 05:45:38 -- common/autotest_common.sh@931 -- # uname 00:26:35.076 05:45:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:35.076 05:45:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 176398 00:26:35.076 05:45:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:35.076 05:45:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:35.076 killing process with pid 176398 00:26:35.076 05:45:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 176398' 00:26:35.076 05:45:38 -- common/autotest_common.sh@945 -- # kill 176398 00:26:35.076 05:45:38 -- common/autotest_common.sh@950 -- # wait 176398 00:26:36.455 05:45:40 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:36.455 05:45:40 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=176546 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:36.455 05:45:40 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 176546 /var/tmp/spdk.sock 00:26:36.456 05:45:40 -- common/autotest_common.sh@819 -- # '[' -z 176546 ']' 00:26:36.456 05:45:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.456 05:45:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:36.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.456 05:45:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.456 05:45:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:36.456 05:45:40 -- common/autotest_common.sh@10 -- # set +x 00:26:36.456 [2024-10-07 05:45:40.202780] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:36.456 [2024-10-07 05:45:40.203017] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176546 ] 00:26:36.456 [2024-10-07 05:45:40.382966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.714 [2024-10-07 05:45:40.577332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.714 [2024-10-07 05:45:40.577458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.714 [2024-10-07 05:45:40.577465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.974 [2024-10-07 05:45:40.862548] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
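For context on the restart traced above: start_intr_tgt launches the interrupt_tgt example app on a three-core mask with a dedicated RPC socket and then blocks until that socket answers RPCs before the test issues any further commands. A minimal sketch of that launch-and-wait pattern, assuming the same binary and socket paths as in the trace; the polling loop below is illustrative only, since the test itself relies on the waitforlisten helper from autotest_common.sh shown in the log:

  # Launch interrupt_tgt on cores 0-2 with an RPC socket, then wait until RPCs succeed (sketch).
  rpc_sock=/var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_sock" -E -g &
  intr_tgt_pid=$!
  for _ in $(seq 1 100); do
      # rpc.py keeps failing until the target is up and listening on the socket.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
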
00:26:37.232 05:45:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:37.232 05:45:41 -- common/autotest_common.sh@852 -- # return 0 00:26:37.232 05:45:41 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:37.232 05:45:41 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:37.491 Malloc0 00:26:37.491 Malloc1 00:26:37.491 Malloc2 00:26:37.750 05:45:41 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:37.750 05:45:41 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:37.750 05:45:41 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:37.750 05:45:41 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:37.750 5000+0 records in 00:26:37.750 5000+0 records out 00:26:37.750 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0257919 s, 397 MB/s 00:26:37.750 05:45:41 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:38.009 AIO0 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 176546 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 176546 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=176546 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:38.009 05:45:41 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:38.009 05:45:41 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.269 05:45:41 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:38.269 spdk_thread ids are 1 on reactor0. 
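reactor_get_thread_ids, traced just above, maps a reactor's cpumask to the ids of the spdk_threads currently running on it: it drops the 0x prefix from the mask, calls thread_get_stats over RPC, and filters the JSON with jq on each thread's cpumask field. A small sketch of that lookup for reactor 0; the parameter expansion used to strip the prefix is one way to get the prefix-free form ("1", "4") the trace shows:

  # Resolve the spdk_thread ids pinned to a reactor by filtering thread_get_stats on cpumask (sketch).
  reactor_cpumask=0x1
  mask=${reactor_cpumask#0x}   # prefix-free form, matching the "1" / "4" values in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
      | jq --arg reactor_cpumask "$mask" \
           '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
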
00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.269 05:45:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176546 0 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176546 0 idle 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:38.269 05:45:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176546 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.75 reactor_0' 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@48 -- # echo 176546 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.75 reactor_0 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:38.528 05:45:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.528 05:45:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176546 1 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176546 1 idle 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:38.528 05:45:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176553 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.00 reactor_1' 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # echo 176553 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.00 reactor_1 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:38.788 05:45:42 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:38.788 05:45:42 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.788 05:45:42 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 176546 2 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176546 2 idle 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176554 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.00 reactor_2' 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # echo 176554 root 20 0 20.1t 146088 29008 S 0.0 1.2 0:00.00 reactor_2 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:38.788 05:45:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:38.788 05:45:42 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:38.788 05:45:42 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:39.047 [2024-10-07 05:45:42.974852] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:39.047 [2024-10-07 05:45:42.975134] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:26:39.047 [2024-10-07 05:45:42.975430] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:39.047 05:45:42 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:39.305 [2024-10-07 05:45:43.206722] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
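At this point the test has flipped reactors 0 and 2 out of interrupt mode (the -d flag to reactor_set_interrupt_mode disables it) and is about to verify they go busy. The busy/idle verdicts throughout this log come from reactor_is_busy_or_idle, which samples one iteration of top for the target pid, pulls the %CPU column for the reactor_<idx> thread, and compares it against thresholds: a reactor counts as busy only if its rate is not below 70% and as idle only if it is not above 30%. A condensed sketch of that check using the same top/grep/sed/awk pipeline the trace shows; truncating the decimal with a parameter expansion is one way to get the integer values (99.9 -> 99) seen in the trace:

  # Sample %CPU of reactor_<idx> once via top and compare against the test's thresholds (sketch).
  pid=176546; idx=2; state=busy
  line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
  cpu_rate=$(echo "$line" | awk '{print $9}')   # %CPU column
  cpu_rate=${cpu_rate%.*}                       # keep the integer part, e.g. 99.9 -> 99
  if [[ $state == busy && $cpu_rate -lt 70 ]]; then
      echo "reactor_${idx} expected busy but is at ${cpu_rate}%"; exit 1
  elif [[ $state == idle && $cpu_rate -gt 30 ]]; then
      echo "reactor_${idx} expected idle but is at ${cpu_rate}%"; exit 1
  fi
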
00:26:39.305 [2024-10-07 05:45:43.207082] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:39.305 05:45:43 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:39.305 05:45:43 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 176546 0 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 176546 0 busy 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:39.305 05:45:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:39.563 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176546 root 20 0 20.1t 146176 29008 R 99.9 1.2 0:01.15 reactor_0' 00:26:39.563 05:45:43 -- interrupt/interrupt_common.sh@48 -- # echo 176546 root 20 0 20.1t 146176 29008 R 99.9 1.2 0:01.15 reactor_0 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:39.564 05:45:43 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:39.564 05:45:43 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 176546 2 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 176546 2 busy 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:39.564 05:45:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176554 root 20 0 20.1t 146176 29008 R 99.9 1.2 0:00.33 reactor_2' 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@48 -- # echo 176554 root 20 0 20.1t 146176 29008 R 99.9 1.2 0:00.33 reactor_2 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:39.822 
05:45:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:39.822 05:45:43 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:39.822 [2024-10-07 05:45:43.731023] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:39.822 [2024-10-07 05:45:43.731247] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:39.822 05:45:43 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:39.822 05:45:43 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 176546 2 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176546 2 idle 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:39.822 05:45:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176554 root 20 0 20.1t 146244 29008 S 0.0 1.2 0:00.52 reactor_2' 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@48 -- # echo 176554 root 20 0 20.1t 146244 29008 S 0.0 1.2 0:00.52 reactor_2 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:40.081 05:45:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:40.081 05:45:43 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:40.340 [2024-10-07 05:45:44.147088] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:40.340 [2024-10-07 05:45:44.147484] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:26:40.340 [2024-10-07 05:45:44.147526] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:40.340 05:45:44 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:40.340 05:45:44 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 176546 0 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 176546 0 idle 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@33 -- # local pid=176546 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 176546 -w 256 00:26:40.340 05:45:44 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 176546 root 20 0 20.1t 146288 29008 S 0.0 1.2 0:01.93 reactor_0' 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@48 -- # echo 176546 root 20 0 20.1t 146288 29008 S 0.0 1.2 0:01.93 reactor_0 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:40.612 05:45:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:40.612 05:45:44 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:40.613 05:45:44 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:40.613 05:45:44 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:40.613 05:45:44 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 176546 00:26:40.613 05:45:44 -- common/autotest_common.sh@926 -- # '[' -z 176546 ']' 00:26:40.613 05:45:44 -- common/autotest_common.sh@930 -- # kill -0 176546 00:26:40.613 05:45:44 -- common/autotest_common.sh@931 -- # uname 00:26:40.613 05:45:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:40.613 05:45:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 176546 00:26:40.613 05:45:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:40.613 05:45:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:40.613 05:45:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 176546' 00:26:40.613 killing process with pid 176546 00:26:40.613 05:45:44 -- common/autotest_common.sh@945 -- # kill 176546 00:26:40.613 05:45:44 -- common/autotest_common.sh@950 -- # wait 176546 00:26:42.006 05:45:45 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:42.006 00:26:42.006 real 0m11.790s 00:26:42.006 
user 0m11.714s 00:26:42.006 sys 0m1.889s 00:26:42.006 05:45:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.006 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:42.006 ************************************ 00:26:42.006 END TEST reactor_set_interrupt 00:26:42.006 ************************************ 00:26:42.006 05:45:45 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.006 05:45:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:42.006 05:45:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:42.006 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:42.006 ************************************ 00:26:42.006 START TEST reap_unregistered_poller 00:26:42.006 ************************************ 00:26:42.006 05:45:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.006 * Looking for test storage... 00:26:42.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.006 05:45:45 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:42.006 05:45:45 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:42.006 05:45:45 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:42.006 05:45:45 -- common/autotest_common.sh@34 -- # set -e 00:26:42.006 05:45:45 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:42.006 05:45:45 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:42.006 05:45:45 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:42.006 05:45:45 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:42.006 05:45:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:42.006 05:45:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:42.006 05:45:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:42.006 05:45:45 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:42.006 05:45:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:42.006 05:45:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:42.006 05:45:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:42.006 05:45:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:42.006 05:45:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:42.006 05:45:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:42.006 05:45:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:42.006 05:45:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:42.006 05:45:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:42.006 05:45:45 
-- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:42.006 05:45:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:42.006 05:45:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:42.006 05:45:45 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:42.006 05:45:45 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:42.006 05:45:45 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:42.006 05:45:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:42.006 05:45:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:42.006 05:45:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:42.006 05:45:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:42.006 05:45:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:42.006 05:45:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:42.006 05:45:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:42.006 05:45:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:42.006 05:45:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:42.006 05:45:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:42.006 05:45:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:42.006 05:45:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:42.006 05:45:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:42.006 05:45:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:42.006 05:45:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:42.006 05:45:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:42.006 05:45:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:42.006 05:45:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:42.006 05:45:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:42.006 05:45:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:42.006 05:45:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:42.006 05:45:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:42.006 05:45:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:42.006 05:45:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:42.006 05:45:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:42.006 05:45:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:42.006 05:45:45 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:42.006 05:45:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:42.006 05:45:45 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:42.006 05:45:45 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:42.006 05:45:45 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:42.006 05:45:45 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:42.006 05:45:45 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:42.006 05:45:45 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:42.006 05:45:45 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:42.006 05:45:45 -- 
common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:26:42.006 05:45:45 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:42.006 05:45:45 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:42.006 05:45:45 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:42.006 05:45:45 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:42.006 05:45:45 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:42.006 05:45:45 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:42.006 05:45:45 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:42.006 05:45:45 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:42.006 05:45:45 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:42.006 05:45:45 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:42.006 05:45:45 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:42.006 05:45:45 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:42.006 05:45:45 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:42.006 05:45:45 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:42.006 05:45:45 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:42.006 05:45:45 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:42.006 05:45:45 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:42.006 05:45:45 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:42.006 05:45:45 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:42.006 05:45:45 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:42.006 05:45:45 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:42.006 05:45:45 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:42.006 05:45:45 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:42.006 05:45:45 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.006 05:45:45 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:42.006 05:45:45 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.006 05:45:45 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:42.006 05:45:45 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:42.006 05:45:45 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:42.006 05:45:45 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:42.006 05:45:45 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:42.007 05:45:45 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:42.007 05:45:45 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:42.007 05:45:45 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:42.007 #define SPDK_CONFIG_H 00:26:42.007 #define SPDK_CONFIG_APPS 1 00:26:42.007 #define SPDK_CONFIG_ARCH native 00:26:42.007 #define SPDK_CONFIG_ASAN 1 00:26:42.007 #undef SPDK_CONFIG_AVAHI 00:26:42.007 #undef SPDK_CONFIG_CET 00:26:42.007 #define SPDK_CONFIG_COVERAGE 1 00:26:42.007 #define SPDK_CONFIG_CROSS_PREFIX 00:26:42.007 #undef SPDK_CONFIG_CRYPTO 00:26:42.007 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:42.007 #undef SPDK_CONFIG_CUSTOMOCF 00:26:42.007 #undef SPDK_CONFIG_DAOS 00:26:42.007 #define SPDK_CONFIG_DAOS_DIR 00:26:42.007 #define 
SPDK_CONFIG_DEBUG 1 00:26:42.007 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:42.007 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:42.007 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:42.007 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:42.007 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:42.007 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:42.007 #define SPDK_CONFIG_EXAMPLES 1 00:26:42.007 #undef SPDK_CONFIG_FC 00:26:42.007 #define SPDK_CONFIG_FC_PATH 00:26:42.007 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:42.007 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:42.007 #undef SPDK_CONFIG_FUSE 00:26:42.007 #undef SPDK_CONFIG_FUZZER 00:26:42.007 #define SPDK_CONFIG_FUZZER_LIB 00:26:42.007 #undef SPDK_CONFIG_GOLANG 00:26:42.007 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:42.007 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:42.007 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:42.007 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:42.007 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:42.007 #define SPDK_CONFIG_IDXD 1 00:26:42.007 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:42.007 #undef SPDK_CONFIG_IPSEC_MB 00:26:42.007 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:42.007 #define SPDK_CONFIG_ISAL 1 00:26:42.007 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:42.007 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:42.007 #define SPDK_CONFIG_LIBDIR 00:26:42.007 #undef SPDK_CONFIG_LTO 00:26:42.007 #define SPDK_CONFIG_MAX_LCORES 00:26:42.007 #define SPDK_CONFIG_NVME_CUSE 1 00:26:42.007 #undef SPDK_CONFIG_OCF 00:26:42.007 #define SPDK_CONFIG_OCF_PATH 00:26:42.007 #define SPDK_CONFIG_OPENSSL_PATH 00:26:42.007 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:42.007 #undef SPDK_CONFIG_PGO_USE 00:26:42.007 #define SPDK_CONFIG_PREFIX /usr/local 00:26:42.007 #define SPDK_CONFIG_RAID5F 1 00:26:42.007 #undef SPDK_CONFIG_RBD 00:26:42.007 #define SPDK_CONFIG_RDMA 1 00:26:42.007 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:42.007 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:42.007 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:42.007 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:42.007 #undef SPDK_CONFIG_SHARED 00:26:42.007 #undef SPDK_CONFIG_SMA 00:26:42.007 #define SPDK_CONFIG_TESTS 1 00:26:42.007 #undef SPDK_CONFIG_TSAN 00:26:42.007 #undef SPDK_CONFIG_UBLK 00:26:42.007 #define SPDK_CONFIG_UBSAN 1 00:26:42.007 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:42.007 #undef SPDK_CONFIG_URING 00:26:42.007 #define SPDK_CONFIG_URING_PATH 00:26:42.007 #undef SPDK_CONFIG_URING_ZNS 00:26:42.007 #undef SPDK_CONFIG_USDT 00:26:42.007 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:42.007 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:42.007 #undef SPDK_CONFIG_VFIO_USER 00:26:42.007 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:42.007 #define SPDK_CONFIG_VHOST 1 00:26:42.007 #define SPDK_CONFIG_VIRTIO 1 00:26:42.007 #undef SPDK_CONFIG_VTUNE 00:26:42.007 #define SPDK_CONFIG_VTUNE_DIR 00:26:42.007 #define SPDK_CONFIG_WERROR 1 00:26:42.007 #define SPDK_CONFIG_WPDK_DIR 00:26:42.007 #undef SPDK_CONFIG_XNVME 00:26:42.007 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:42.007 05:45:45 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:42.007 05:45:45 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:42.007 05:45:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.007 05:45:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.007 05:45:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
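The large block above is applications.sh dumping include/spdk/config.h and pattern-matching its contents for "#define SPDK_CONFIG_DEBUG" before evaluating SPDK_AUTOTEST_DEBUG_APPS; only a debug build passes the check. A minimal sketch of that containment test, assuming the same header path as in the trace:

  # The build counts as a debug build if the generated config header defines SPDK_CONFIG_DEBUG (sketch).
  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
  if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  fi
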
00:26:42.007 05:45:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.007 05:45:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.007 05:45:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.007 05:45:45 -- paths/export.sh@5 -- # export PATH 00:26:42.007 05:45:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:42.007 05:45:45 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:42.007 05:45:45 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:42.007 05:45:45 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:42.007 05:45:45 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:42.007 05:45:45 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:42.007 05:45:45 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:42.007 05:45:45 -- pm/common@16 -- # TEST_TAG=N/A 00:26:42.007 05:45:45 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:42.007 05:45:45 -- common/autotest_common.sh@52 -- # : 1 00:26:42.007 05:45:45 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:42.007 05:45:45 -- common/autotest_common.sh@56 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:42.007 05:45:45 -- common/autotest_common.sh@58 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:42.007 05:45:45 -- common/autotest_common.sh@60 -- # : 1 00:26:42.007 05:45:45 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:42.007 05:45:45 -- common/autotest_common.sh@62 -- # : 1 00:26:42.007 05:45:45 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:42.007 05:45:45 -- common/autotest_common.sh@64 -- # : 00:26:42.007 05:45:45 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:42.007 05:45:45 -- common/autotest_common.sh@66 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@67 -- # 
export SPDK_TEST_RELEASE_BUILD 00:26:42.007 05:45:45 -- common/autotest_common.sh@68 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:42.007 05:45:45 -- common/autotest_common.sh@70 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:42.007 05:45:45 -- common/autotest_common.sh@72 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:42.007 05:45:45 -- common/autotest_common.sh@74 -- # : 1 00:26:42.007 05:45:45 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:42.007 05:45:45 -- common/autotest_common.sh@76 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:42.007 05:45:45 -- common/autotest_common.sh@78 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:42.007 05:45:45 -- common/autotest_common.sh@80 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:42.007 05:45:45 -- common/autotest_common.sh@82 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:42.007 05:45:45 -- common/autotest_common.sh@84 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:42.007 05:45:45 -- common/autotest_common.sh@86 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:42.007 05:45:45 -- common/autotest_common.sh@88 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:42.007 05:45:45 -- common/autotest_common.sh@90 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:42.007 05:45:45 -- common/autotest_common.sh@92 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:42.007 05:45:45 -- common/autotest_common.sh@94 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:42.007 05:45:45 -- common/autotest_common.sh@96 -- # : rdma 00:26:42.007 05:45:45 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:42.007 05:45:45 -- common/autotest_common.sh@98 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:42.007 05:45:45 -- common/autotest_common.sh@100 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:42.007 05:45:45 -- common/autotest_common.sh@102 -- # : 1 00:26:42.007 05:45:45 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:42.007 05:45:45 -- common/autotest_common.sh@104 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:42.007 05:45:45 -- common/autotest_common.sh@106 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:42.007 05:45:45 -- common/autotest_common.sh@108 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:42.007 05:45:45 -- common/autotest_common.sh@110 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:42.007 05:45:45 -- common/autotest_common.sh@112 -- # : 0 00:26:42.007 05:45:45 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:42.007 05:45:45 -- common/autotest_common.sh@114 -- # : 1 00:26:42.007 05:45:45 -- 
common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:42.008 05:45:45 -- common/autotest_common.sh@116 -- # : 1 00:26:42.008 05:45:45 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:42.008 05:45:45 -- common/autotest_common.sh@118 -- # : 00:26:42.008 05:45:45 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:42.008 05:45:45 -- common/autotest_common.sh@120 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:42.008 05:45:45 -- common/autotest_common.sh@122 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:42.008 05:45:45 -- common/autotest_common.sh@124 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:42.008 05:45:45 -- common/autotest_common.sh@126 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:42.008 05:45:45 -- common/autotest_common.sh@128 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:42.008 05:45:45 -- common/autotest_common.sh@130 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:42.008 05:45:45 -- common/autotest_common.sh@132 -- # : 00:26:42.008 05:45:45 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:42.008 05:45:45 -- common/autotest_common.sh@134 -- # : true 00:26:42.008 05:45:45 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:42.008 05:45:45 -- common/autotest_common.sh@136 -- # : 1 00:26:42.008 05:45:45 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:42.008 05:45:45 -- common/autotest_common.sh@138 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:42.008 05:45:45 -- common/autotest_common.sh@140 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:42.008 05:45:45 -- common/autotest_common.sh@142 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:42.008 05:45:45 -- common/autotest_common.sh@144 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:42.008 05:45:45 -- common/autotest_common.sh@146 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:42.008 05:45:45 -- common/autotest_common.sh@148 -- # : 00:26:42.008 05:45:45 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:42.008 05:45:45 -- common/autotest_common.sh@150 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:42.008 05:45:45 -- common/autotest_common.sh@152 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:42.008 05:45:45 -- common/autotest_common.sh@154 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:42.008 05:45:45 -- common/autotest_common.sh@156 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:42.008 05:45:45 -- common/autotest_common.sh@158 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:42.008 05:45:45 -- common/autotest_common.sh@160 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:42.008 05:45:45 -- common/autotest_common.sh@163 -- # : 00:26:42.008 05:45:45 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:42.008 05:45:45 -- common/autotest_common.sh@165 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:42.008 05:45:45 -- common/autotest_common.sh@167 -- # : 0 00:26:42.008 05:45:45 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:42.008 05:45:45 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:42.008 05:45:45 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:42.008 05:45:45 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:42.008 05:45:45 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:42.008 05:45:45 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:42.008 05:45:45 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.008 05:45:45 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:42.008 05:45:45 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:26:42.008 05:45:45 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:42.008 05:45:45 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:42.008 05:45:45 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:42.008 05:45:45 -- common/autotest_common.sh@196 -- # cat 00:26:42.008 05:45:45 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:42.008 05:45:45 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.008 05:45:45 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:42.008 05:45:45 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.008 05:45:45 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:42.008 05:45:45 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:42.008 05:45:45 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:42.008 05:45:45 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.008 05:45:45 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:42.008 05:45:45 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.008 05:45:45 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:42.008 05:45:45 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:42.008 05:45:45 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:42.008 05:45:45 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:42.008 05:45:45 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:42.008 05:45:45 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:42.008 05:45:45 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:42.008 05:45:45 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:42.008 05:45:45 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:42.008 05:45:45 -- common/autotest_common.sh@249 -- # valgrind= 00:26:42.008 05:45:45 -- common/autotest_common.sh@255 -- # uname -s 00:26:42.008 05:45:45 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:42.008 05:45:45 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:42.008 05:45:45 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:42.008 05:45:45 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:42.008 05:45:45 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:42.008 05:45:45 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:42.008 05:45:45 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:42.008 05:45:45 -- 
common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:42.008 05:45:45 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:42.008 05:45:45 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:42.008 05:45:45 -- common/autotest_common.sh@309 -- # [[ -z 176728 ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@309 -- # kill -0 176728 00:26:42.008 05:45:45 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:42.008 05:45:45 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:42.008 05:45:45 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:42.008 05:45:45 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:42.008 05:45:45 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:42.008 05:45:45 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:42.008 05:45:45 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:42.008 05:45:45 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.9ncDdo 00:26:42.008 05:45:45 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:42.008 05:45:45 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:42.008 05:45:45 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.9ncDdo/tests/interrupt /tmp/spdk.9ncDdo 00:26:42.008 05:45:45 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:42.008 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.008 05:45:45 -- common/autotest_common.sh@318 -- # df -T 00:26:42.009 05:45:45 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248935936 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=4747264 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=9651748864 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=10948268032 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265806848 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268399616 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ 
mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:26:42.009 05:45:45 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # avails["$mount"]=98670235648 00:26:42.009 05:45:45 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:42.009 05:45:45 -- common/autotest_common.sh@354 -- # uses["$mount"]=1032544256 00:26:42.009 05:45:45 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:42.009 05:45:45 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:42.009 * Looking for test storage... 
00:26:42.009 05:45:45 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:42.009 05:45:45 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:42.009 05:45:45 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:42.009 05:45:45 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.009 05:45:45 -- common/autotest_common.sh@363 -- # mount=/ 00:26:42.009 05:45:45 -- common/autotest_common.sh@365 -- # target_space=9651748864 00:26:42.009 05:45:45 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:42.009 05:45:45 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:42.009 05:45:45 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:42.009 05:45:45 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:42.009 05:45:45 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:42.009 05:45:45 -- common/autotest_common.sh@372 -- # new_size=13162860544 00:26:42.009 05:45:45 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:42.009 05:45:45 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.009 05:45:45 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.009 05:45:45 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:42.009 05:45:45 -- common/autotest_common.sh@380 -- # return 0 00:26:42.009 05:45:45 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:42.009 05:45:45 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:42.009 05:45:45 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:42.009 05:45:45 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:42.009 05:45:45 -- common/autotest_common.sh@1672 -- # true 00:26:42.009 05:45:45 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:42.009 05:45:45 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:42.009 05:45:45 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:42.009 05:45:45 -- common/autotest_common.sh@27 -- # exec 00:26:42.009 05:45:45 -- common/autotest_common.sh@29 -- # exec 00:26:42.009 05:45:45 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:42.009 05:45:45 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:42.009 05:45:45 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:42.009 05:45:45 -- common/autotest_common.sh@18 -- # set -x 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:42.009 05:45:45 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:42.009 05:45:45 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:42.009 05:45:45 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=176768 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 176768 /var/tmp/spdk.sock 00:26:42.009 05:45:45 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:42.009 05:45:45 -- common/autotest_common.sh@819 -- # '[' -z 176768 ']' 00:26:42.009 05:45:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.009 05:45:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:42.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.009 05:45:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.009 05:45:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:42.009 05:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:42.009 [2024-10-07 05:45:45.917005] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:42.009 [2024-10-07 05:45:45.917218] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176768 ] 00:26:42.269 [2024-10-07 05:45:46.109807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:42.528 [2024-10-07 05:45:46.365261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.528 [2024-10-07 05:45:46.365401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.528 [2024-10-07 05:45:46.365412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.787 [2024-10-07 05:45:46.660145] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:43.046 05:45:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:43.046 05:45:46 -- common/autotest_common.sh@852 -- # return 0 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:43.046 05:45:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:43.046 05:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:43.046 05:45:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:43.046 "name": "app_thread", 00:26:43.046 "id": 1, 00:26:43.046 "active_pollers": [], 00:26:43.046 "timed_pollers": [ 00:26:43.046 { 00:26:43.046 "name": "rpc_subsystem_poll", 00:26:43.046 "id": 1, 00:26:43.046 "state": "waiting", 00:26:43.046 "run_count": 0, 00:26:43.046 "busy_count": 0, 00:26:43.046 "period_ticks": 8800000 00:26:43.046 } 00:26:43.046 ], 00:26:43.046 "paused_pollers": [] 00:26:43.046 }' 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:43.046 05:45:46 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:43.046 05:45:47 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:43.046 05:45:47 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:43.046 05:45:47 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:43.046 05:45:47 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:43.046 05:45:47 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:43.305 5000+0 records in 00:26:43.305 5000+0 records out 00:26:43.305 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0198644 s, 515 MB/s 00:26:43.305 05:45:47 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:43.305 AIO0 00:26:43.305 05:45:47 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:43.564 05:45:47 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:43.822 05:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:26:43.822 05:45:47 -- common/autotest_common.sh@10 -- # set +x 00:26:43.822 05:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:43.822 "name": "app_thread", 00:26:43.822 "id": 1, 00:26:43.822 "active_pollers": [], 00:26:43.822 "timed_pollers": [ 00:26:43.822 { 00:26:43.822 "name": "rpc_subsystem_poll", 00:26:43.822 "id": 1, 00:26:43.822 "state": "waiting", 00:26:43.822 "run_count": 0, 00:26:43.822 "busy_count": 0, 00:26:43.822 "period_ticks": 8800000 00:26:43.822 } 00:26:43.822 ], 00:26:43.822 "paused_pollers": [] 00:26:43.822 }' 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:43.822 05:45:47 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:44.081 05:45:47 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:44.081 05:45:47 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:44.081 05:45:47 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:44.081 05:45:47 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 176768 00:26:44.081 05:45:47 -- common/autotest_common.sh@926 -- # '[' -z 176768 ']' 00:26:44.081 05:45:47 -- common/autotest_common.sh@930 -- # kill -0 176768 00:26:44.081 05:45:47 -- common/autotest_common.sh@931 -- # uname 00:26:44.081 05:45:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:44.081 05:45:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 176768 00:26:44.081 05:45:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:44.081 05:45:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:44.081 killing process with pid 176768 00:26:44.081 05:45:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 176768' 00:26:44.081 05:45:47 -- common/autotest_common.sh@945 -- # kill 176768 00:26:44.081 05:45:47 -- common/autotest_common.sh@950 -- # wait 176768 00:26:45.016 05:45:48 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:45.016 05:45:48 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:45.016 00:26:45.016 real 0m3.232s 00:26:45.016 user 0m2.720s 00:26:45.016 sys 0m0.603s 00:26:45.016 05:45:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.016 ************************************ 00:26:45.016 END TEST reap_unregistered_poller 00:26:45.016 ************************************ 00:26:45.016 05:45:48 -- common/autotest_common.sh@10 -- # set +x 00:26:45.016 05:45:48 -- spdk/autotest.sh@204 -- # uname -s 00:26:45.016 05:45:48 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:26:45.016 05:45:48 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:26:45.016 05:45:48 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:26:45.016 05:45:48 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:45.016 05:45:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:45.016 05:45:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:45.016 05:45:48 -- 
common/autotest_common.sh@10 -- # set +x 00:26:45.016 ************************************ 00:26:45.016 START TEST spdk_dd 00:26:45.016 ************************************ 00:26:45.016 05:45:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:45.274 * Looking for test storage... 00:26:45.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:45.275 05:45:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:45.275 05:45:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.275 05:45:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.275 05:45:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.275 05:45:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:45.275 05:45:49 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:45.275 05:45:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:45.275 05:45:49 -- paths/export.sh@5 -- # export PATH 00:26:45.275 05:45:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:45.275 05:45:49 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:45.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:45.534 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:46.910 05:45:50 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:46.910 05:45:50 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:46.910 05:45:50 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:46.910 05:45:50 -- scripts/common.sh@312 -- # local nvmes 00:26:46.910 05:45:50 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:46.910 05:45:50 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:46.910 05:45:50 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:46.910 05:45:50 -- scripts/common.sh@297 -- # local bdf= 00:26:46.910 05:45:50 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:46.910 05:45:50 -- scripts/common.sh@232 -- # local class 00:26:46.910 
05:45:50 -- scripts/common.sh@233 -- # local subclass 00:26:46.910 05:45:50 -- scripts/common.sh@234 -- # local progif 00:26:46.910 05:45:50 -- scripts/common.sh@235 -- # printf %02x 1 00:26:46.910 05:45:50 -- scripts/common.sh@235 -- # class=01 00:26:46.910 05:45:50 -- scripts/common.sh@236 -- # printf %02x 8 00:26:46.910 05:45:50 -- scripts/common.sh@236 -- # subclass=08 00:26:46.910 05:45:50 -- scripts/common.sh@237 -- # printf %02x 2 00:26:46.910 05:45:50 -- scripts/common.sh@237 -- # progif=02 00:26:46.910 05:45:50 -- scripts/common.sh@239 -- # hash lspci 00:26:46.910 05:45:50 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:46.910 05:45:50 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:46.910 05:45:50 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:46.910 05:45:50 -- scripts/common.sh@244 -- # tr -d '"' 00:26:46.910 05:45:50 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:46.910 05:45:50 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:46.910 05:45:50 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:46.910 05:45:50 -- scripts/common.sh@15 -- # local i 00:26:46.910 05:45:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:46.910 05:45:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:46.910 05:45:50 -- scripts/common.sh@24 -- # return 0 00:26:46.910 05:45:50 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:46.910 05:45:50 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:46.910 05:45:50 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:46.910 05:45:50 -- scripts/common.sh@322 -- # uname -s 00:26:46.910 05:45:50 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:46.910 05:45:50 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:46.910 05:45:50 -- scripts/common.sh@327 -- # (( 1 )) 00:26:46.910 05:45:50 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:46.910 05:45:50 -- dd/dd.sh@13 -- # check_liburing 00:26:46.910 05:45:50 -- dd/common.sh@139 -- # local lib so 00:26:46.910 05:45:50 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:46.910 05:45:50 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:26:46.910 05:45:50 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:46.910 05:45:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:46.910 05:45:50 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:46.910 05:45:50 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:46.910 05:45:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:46.910 05:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:46.910 05:45:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.910 ************************************ 00:26:46.910 START TEST spdk_dd_basic_rw 00:26:46.910 ************************************ 00:26:46.910 05:45:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:47.168 * Looking for test storage... 
00:26:47.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:47.169 05:45:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.169 05:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.169 05:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.169 05:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.169 05:45:50 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.169 05:45:50 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.169 05:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.169 05:45:50 -- paths/export.sh@5 -- # export PATH 00:26:47.169 05:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.169 05:45:50 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:47.169 05:45:50 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:47.169 05:45:50 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:47.169 05:45:50 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:47.169 05:45:50 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:47.169 05:45:50 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:47.169 05:45:50 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:26:47.169 05:45:50 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:47.169 05:45:50 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:47.169 05:45:50 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:26:47.169 05:45:50 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:47.169 05:45:50 -- dd/common.sh@126 -- # mapfile -t id 00:26:47.169 05:45:50 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:47.429 05:45:51 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2224 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:47.429 05:45:51 -- dd/common.sh@130 -- # lbaf=04 00:26:47.430 05:45:51 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2224 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:47.430 05:45:51 -- dd/common.sh@132 -- # lbaf=4096 00:26:47.430 05:45:51 -- dd/common.sh@134 -- # echo 4096 00:26:47.430 05:45:51 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:47.430 05:45:51 -- dd/basic_rw.sh@96 -- # : 00:26:47.430 05:45:51 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:47.430 05:45:51 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:47.430 05:45:51 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:47.430 05:45:51 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.430 05:45:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.430 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:26:47.430 05:45:51 -- common/autotest_common.sh@10 -- # set +x 00:26:47.430 ************************************ 
00:26:47.430 START TEST dd_bs_lt_native_bs 00:26:47.430 ************************************ 00:26:47.430 05:45:51 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:47.430 05:45:51 -- common/autotest_common.sh@640 -- # local es=0 00:26:47.430 05:45:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:47.430 05:45:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.430 05:45:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:47.430 05:45:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.430 05:45:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:47.430 05:45:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.430 05:45:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:47.430 05:45:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:47.430 05:45:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:47.430 05:45:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:47.430 { 00:26:47.430 "subsystems": [ 00:26:47.430 { 00:26:47.430 "subsystem": "bdev", 00:26:47.430 "config": [ 00:26:47.430 { 00:26:47.430 "params": { 00:26:47.430 "trtype": "pcie", 00:26:47.430 "traddr": "0000:00:06.0", 00:26:47.430 "name": "Nvme0" 00:26:47.430 }, 00:26:47.430 "method": "bdev_nvme_attach_controller" 00:26:47.430 }, 00:26:47.430 { 00:26:47.430 "method": "bdev_wait_for_examine" 00:26:47.430 } 00:26:47.430 ] 00:26:47.431 } 00:26:47.431 ] 00:26:47.431 } 00:26:47.431 [2024-10-07 05:45:51.297970] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
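[editor's note] The JSON handed to spdk_dd over /dev/fd/61 and /dev/fd/62 throughout this run is the same minimal bdev configuration every time; it is presumably produced by the gen_conf call visible in the trace. The snippet below is only a simplified stand-in reassembled from the fragments above, not the actual SPDK helper, and the heredoc approach is an assumption.

  gen_conf() {    # simplified stand-in for the gen_conf seen in the trace
    cat <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  JSON
  }

With this config, spdk_dd attaches the PCIe controller at 0000:00:06.0 as bdev Nvme0 and waits for examine to finish before running the copy.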
00:26:47.431 [2024-10-07 05:45:51.298173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177079 ] 00:26:47.689 [2024-10-07 05:45:51.464133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.689 [2024-10-07 05:45:51.657585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.269 [2024-10-07 05:45:52.010839] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:48.269 [2024-10-07 05:45:52.010933] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:48.833 [2024-10-07 05:45:52.645669] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:49.089 05:45:53 -- common/autotest_common.sh@643 -- # es=234 00:26:49.089 05:45:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:49.089 05:45:53 -- common/autotest_common.sh@652 -- # es=106 00:26:49.089 05:45:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:49.089 05:45:53 -- common/autotest_common.sh@660 -- # es=1 00:26:49.089 05:45:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:49.089 00:26:49.089 real 0m1.798s 00:26:49.089 user 0m1.481s 00:26:49.089 sys 0m0.277s 00:26:49.089 05:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.089 05:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.089 ************************************ 00:26:49.089 END TEST dd_bs_lt_native_bs 00:26:49.089 ************************************ 00:26:49.089 05:45:53 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:49.089 05:45:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:49.089 05:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.089 05:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.346 ************************************ 00:26:49.346 START TEST dd_rw 00:26:49.346 ************************************ 00:26:49.346 05:45:53 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:26:49.346 05:45:53 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:49.346 05:45:53 -- dd/basic_rw.sh@12 -- # local count size 00:26:49.346 05:45:53 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:49.346 05:45:53 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:49.346 05:45:53 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:49.346 05:45:53 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:49.346 05:45:53 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:49.346 05:45:53 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:49.346 05:45:53 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:49.346 05:45:53 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:49.346 05:45:53 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:49.346 05:45:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:49.346 05:45:53 -- dd/basic_rw.sh@23 -- # count=15 00:26:49.346 05:45:53 -- dd/basic_rw.sh@24 -- # count=15 00:26:49.346 05:45:53 -- dd/basic_rw.sh@25 -- # size=61440 00:26:49.346 05:45:53 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:49.346 05:45:53 -- dd/common.sh@98 -- # xtrace_disable 00:26:49.346 05:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.912 05:45:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
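[editor's note] The failed copy above is the whole point of dd_bs_lt_native_bs: the requested --bs of 2048 is smaller than the 4096-byte native block size extracted from the identify dump earlier, so spdk_dd refuses with the "--bs value cannot be less than ... native block size" error and the NOT wrapper turns that failure into a pass. The sketch below only illustrates the idea; $id_output, the dd.dump0 input file, and the direct if-statement are simplifications (the real case pipes input over /dev/fd/62 and uses the run_test/NOT helpers).

  # derive the native block size from the identify output (assumed to be in $id_output)
  re='Current LBA Format: *LBA Format #0*([0-9]+)'
  [[ $id_output =~ $re ]] && cur_lbaf=${BASH_REMATCH[1]}
  re="LBA Format #0*${cur_lbaf}: Data Size: *([0-9]+)"
  [[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}      # 4096 in this run

  # expect the copy to fail when --bs is below native_bs (NOT inverts the exit status)
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  if "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=2048 --json <(gen_conf); then
      echo "dd_bs_lt_native_bs: expected spdk_dd to reject bs=2048 < native_bs=$native_bs" >&2
      exit 1
  fi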
00:26:49.912 05:45:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:49.912 05:45:53 -- dd/common.sh@31 -- # xtrace_disable 00:26:49.912 05:45:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.912 { 00:26:49.912 "subsystems": [ 00:26:49.912 { 00:26:49.912 "subsystem": "bdev", 00:26:49.912 "config": [ 00:26:49.912 { 00:26:49.913 "params": { 00:26:49.913 "trtype": "pcie", 00:26:49.913 "traddr": "0000:00:06.0", 00:26:49.913 "name": "Nvme0" 00:26:49.913 }, 00:26:49.913 "method": "bdev_nvme_attach_controller" 00:26:49.913 }, 00:26:49.913 { 00:26:49.913 "method": "bdev_wait_for_examine" 00:26:49.913 } 00:26:49.913 ] 00:26:49.913 } 00:26:49.913 ] 00:26:49.913 } 00:26:49.913 [2024-10-07 05:45:53.670619] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:49.913 [2024-10-07 05:45:53.670821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177130 ] 00:26:49.913 [2024-10-07 05:45:53.832597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.171 [2024-10-07 05:45:54.031136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.364  Copying: 60/60 [kB] (average 14 MBps) 00:26:51.364 00:26:51.364 05:45:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:51.364 05:45:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:51.364 05:45:55 -- dd/common.sh@31 -- # xtrace_disable 00:26:51.364 05:45:55 -- common/autotest_common.sh@10 -- # set +x 00:26:51.622 { 00:26:51.622 "subsystems": [ 00:26:51.622 { 00:26:51.622 "subsystem": "bdev", 00:26:51.622 "config": [ 00:26:51.622 { 00:26:51.622 "params": { 00:26:51.622 "trtype": "pcie", 00:26:51.622 "traddr": "0000:00:06.0", 00:26:51.622 "name": "Nvme0" 00:26:51.622 }, 00:26:51.622 "method": "bdev_nvme_attach_controller" 00:26:51.622 }, 00:26:51.622 { 00:26:51.622 "method": "bdev_wait_for_examine" 00:26:51.622 } 00:26:51.622 ] 00:26:51.622 } 00:26:51.622 ] 00:26:51.622 } 00:26:51.622 [2024-10-07 05:45:55.394217] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:51.623 [2024-10-07 05:45:55.394920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177165 ] 00:26:51.623 [2024-10-07 05:45:55.561842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.881 [2024-10-07 05:45:55.753192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.514  Copying: 60/60 [kB] (average 19 MBps) 00:26:53.514 00:26:53.514 05:45:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:53.514 05:45:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:53.514 05:45:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:53.514 05:45:57 -- dd/common.sh@11 -- # local nvme_ref= 00:26:53.514 05:45:57 -- dd/common.sh@12 -- # local size=61440 00:26:53.514 05:45:57 -- dd/common.sh@14 -- # local bs=1048576 00:26:53.514 05:45:57 -- dd/common.sh@15 -- # local count=1 00:26:53.514 05:45:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:53.514 05:45:57 -- dd/common.sh@18 -- # gen_conf 00:26:53.514 05:45:57 -- dd/common.sh@31 -- # xtrace_disable 00:26:53.514 05:45:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.514 { 00:26:53.514 "subsystems": [ 00:26:53.514 { 00:26:53.514 "subsystem": "bdev", 00:26:53.514 "config": [ 00:26:53.514 { 00:26:53.514 "params": { 00:26:53.514 "trtype": "pcie", 00:26:53.514 "traddr": "0000:00:06.0", 00:26:53.514 "name": "Nvme0" 00:26:53.514 }, 00:26:53.514 "method": "bdev_nvme_attach_controller" 00:26:53.514 }, 00:26:53.514 { 00:26:53.514 "method": "bdev_wait_for_examine" 00:26:53.514 } 00:26:53.514 ] 00:26:53.514 } 00:26:53.514 ] 00:26:53.514 } 00:26:53.514 [2024-10-07 05:45:57.225419] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:53.514 [2024-10-07 05:45:57.225618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177194 ] 00:26:53.514 [2024-10-07 05:45:57.386549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.772 [2024-10-07 05:45:57.576660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.965  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:54.965 00:26:54.965 05:45:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:54.965 05:45:58 -- dd/basic_rw.sh@23 -- # count=15 00:26:54.965 05:45:58 -- dd/basic_rw.sh@24 -- # count=15 00:26:54.965 05:45:58 -- dd/basic_rw.sh@25 -- # size=61440 00:26:54.965 05:45:58 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:54.965 05:45:58 -- dd/common.sh@98 -- # xtrace_disable 00:26:54.965 05:45:58 -- common/autotest_common.sh@10 -- # set +x 00:26:55.534 05:45:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:55.534 05:45:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:55.534 05:45:59 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.534 05:45:59 -- common/autotest_common.sh@10 -- # set +x 00:26:55.534 { 00:26:55.534 "subsystems": [ 00:26:55.534 { 00:26:55.534 "subsystem": "bdev", 00:26:55.534 "config": [ 00:26:55.534 { 00:26:55.534 "params": { 00:26:55.534 "trtype": "pcie", 00:26:55.534 "traddr": "0000:00:06.0", 00:26:55.534 "name": "Nvme0" 00:26:55.534 }, 00:26:55.534 "method": "bdev_nvme_attach_controller" 00:26:55.534 }, 00:26:55.534 { 00:26:55.534 "method": "bdev_wait_for_examine" 00:26:55.534 } 00:26:55.534 ] 00:26:55.534 } 00:26:55.534 ] 00:26:55.534 } 00:26:55.534 [2024-10-07 05:45:59.467578] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:26:55.534 [2024-10-07 05:45:59.467801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177226 ] 00:26:55.794 [2024-10-07 05:45:59.639335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.098 [2024-10-07 05:45:59.843067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.292  Copying: 60/60 [kB] (average 58 MBps) 00:26:57.292 00:26:57.292 05:46:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:57.292 05:46:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:57.292 05:46:01 -- dd/common.sh@31 -- # xtrace_disable 00:26:57.292 05:46:01 -- common/autotest_common.sh@10 -- # set +x 00:26:57.549 { 00:26:57.549 "subsystems": [ 00:26:57.549 { 00:26:57.549 "subsystem": "bdev", 00:26:57.549 "config": [ 00:26:57.549 { 00:26:57.549 "params": { 00:26:57.549 "trtype": "pcie", 00:26:57.549 "traddr": "0000:00:06.0", 00:26:57.549 "name": "Nvme0" 00:26:57.549 }, 00:26:57.549 "method": "bdev_nvme_attach_controller" 00:26:57.549 }, 00:26:57.549 { 00:26:57.549 "method": "bdev_wait_for_examine" 00:26:57.549 } 00:26:57.549 ] 00:26:57.549 } 00:26:57.549 ] 00:26:57.549 } 00:26:57.549 [2024-10-07 05:46:01.311809] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
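[editor's note] Every dd_rw pass traced here repeats the same four-step round trip: write a generated dump file to the Nvme0n1 bdev, read the same range back into a second dump file, compare the two, then wipe the first megabyte of the device before the next pass. Condensed from the bs=4096, qd=1 pass above; paths are shortened and the JSON config is passed via process substitution instead of the harness's /dev/fd plumbing.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)               # write 15 blocks
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)    # read them back
  diff -q dd.dump0 dd.dump1                                                               # round trip must match
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)        # clear_nvme before the next pass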
00:26:57.549 [2024-10-07 05:46:01.312011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177253 ] 00:26:57.549 [2024-10-07 05:46:01.490196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.808 [2024-10-07 05:46:01.669789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.117  Copying: 60/60 [kB] (average 58 MBps) 00:26:59.117 00:26:59.117 05:46:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.117 05:46:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:59.117 05:46:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:59.117 05:46:02 -- dd/common.sh@11 -- # local nvme_ref= 00:26:59.117 05:46:02 -- dd/common.sh@12 -- # local size=61440 00:26:59.118 05:46:02 -- dd/common.sh@14 -- # local bs=1048576 00:26:59.118 05:46:02 -- dd/common.sh@15 -- # local count=1 00:26:59.118 05:46:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:59.118 05:46:02 -- dd/common.sh@18 -- # gen_conf 00:26:59.118 05:46:02 -- dd/common.sh@31 -- # xtrace_disable 00:26:59.118 05:46:02 -- common/autotest_common.sh@10 -- # set +x 00:26:59.118 { 00:26:59.118 "subsystems": [ 00:26:59.118 { 00:26:59.118 "subsystem": "bdev", 00:26:59.118 "config": [ 00:26:59.118 { 00:26:59.118 "params": { 00:26:59.118 "trtype": "pcie", 00:26:59.118 "traddr": "0000:00:06.0", 00:26:59.118 "name": "Nvme0" 00:26:59.118 }, 00:26:59.118 "method": "bdev_nvme_attach_controller" 00:26:59.118 }, 00:26:59.118 { 00:26:59.118 "method": "bdev_wait_for_examine" 00:26:59.118 } 00:26:59.118 ] 00:26:59.118 } 00:26:59.118 ] 00:26:59.118 } 00:26:59.118 [2024-10-07 05:46:03.042955] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:26:59.118 [2024-10-07 05:46:03.043155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177287 ] 00:26:59.377 [2024-10-07 05:46:03.211349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.637 [2024-10-07 05:46:03.405404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.833  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:00.833 00:27:00.833 05:46:04 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:00.833 05:46:04 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:00.833 05:46:04 -- dd/basic_rw.sh@23 -- # count=7 00:27:00.833 05:46:04 -- dd/basic_rw.sh@24 -- # count=7 00:27:00.833 05:46:04 -- dd/basic_rw.sh@25 -- # size=57344 00:27:00.833 05:46:04 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:00.833 05:46:04 -- dd/common.sh@98 -- # xtrace_disable 00:27:00.833 05:46:04 -- common/autotest_common.sh@10 -- # set +x 00:27:01.401 05:46:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:01.401 05:46:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:01.401 05:46:05 -- dd/common.sh@31 -- # xtrace_disable 00:27:01.401 05:46:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.401 { 00:27:01.401 "subsystems": [ 00:27:01.401 { 00:27:01.401 "subsystem": "bdev", 00:27:01.401 "config": [ 00:27:01.401 { 00:27:01.401 "params": { 00:27:01.401 "trtype": "pcie", 00:27:01.401 "traddr": "0000:00:06.0", 00:27:01.401 "name": "Nvme0" 00:27:01.401 }, 00:27:01.401 "method": "bdev_nvme_attach_controller" 00:27:01.401 }, 00:27:01.401 { 00:27:01.401 "method": "bdev_wait_for_examine" 00:27:01.401 } 00:27:01.401 ] 00:27:01.401 } 00:27:01.401 ] 00:27:01.401 } 00:27:01.401 [2024-10-07 05:46:05.347898] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:01.401 [2024-10-07 05:46:05.348125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177320 ] 00:27:01.660 [2024-10-07 05:46:05.515724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.919 [2024-10-07 05:46:05.708665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.114  Copying: 56/56 [kB] (average 27 MBps) 00:27:03.114 00:27:03.114 05:46:07 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:03.114 05:46:07 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:03.114 05:46:07 -- dd/common.sh@31 -- # xtrace_disable 00:27:03.114 05:46:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.114 [2024-10-07 05:46:07.064071] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
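[editor's note] The block sizes and counts cycling through the rest of dd_rw follow from the loop set up at the start of the test: three block sizes obtained by left-shifting the 4096-byte native size, two queue depths, and a count scaled so each pass moves a similar amount of data. In bash terms, with the values as they appear in this trace:

  native_bs=4096
  bss=( $((native_bs << 0)) $((native_bs << 1)) $((native_bs << 2)) )   # 4096 8192 16384
  qds=(1 64)
  # counts used per block size in this run:
  #   bs=4096  -> count=15 (61440 bytes)
  #   bs=8192  -> count=7  (57344 bytes)
  #   bs=16384 -> count=3  (49152 bytes)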
00:27:03.114 [2024-10-07 05:46:07.064304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177340 ] 00:27:03.114 { 00:27:03.114 "subsystems": [ 00:27:03.114 { 00:27:03.114 "subsystem": "bdev", 00:27:03.114 "config": [ 00:27:03.114 { 00:27:03.114 "params": { 00:27:03.114 "trtype": "pcie", 00:27:03.114 "traddr": "0000:00:06.0", 00:27:03.114 "name": "Nvme0" 00:27:03.114 }, 00:27:03.114 "method": "bdev_nvme_attach_controller" 00:27:03.114 }, 00:27:03.114 { 00:27:03.114 "method": "bdev_wait_for_examine" 00:27:03.114 } 00:27:03.114 ] 00:27:03.114 } 00:27:03.114 ] 00:27:03.114 } 00:27:03.373 [2024-10-07 05:46:07.215392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.632 [2024-10-07 05:46:07.405619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.826  Copying: 56/56 [kB] (average 27 MBps) 00:27:04.826 00:27:04.826 05:46:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:04.826 05:46:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:04.826 05:46:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:04.826 05:46:08 -- dd/common.sh@11 -- # local nvme_ref= 00:27:04.826 05:46:08 -- dd/common.sh@12 -- # local size=57344 00:27:04.826 05:46:08 -- dd/common.sh@14 -- # local bs=1048576 00:27:04.826 05:46:08 -- dd/common.sh@15 -- # local count=1 00:27:04.826 05:46:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:04.826 05:46:08 -- dd/common.sh@18 -- # gen_conf 00:27:04.826 05:46:08 -- dd/common.sh@31 -- # xtrace_disable 00:27:04.826 05:46:08 -- common/autotest_common.sh@10 -- # set +x 00:27:05.085 { 00:27:05.085 "subsystems": [ 00:27:05.085 { 00:27:05.085 "subsystem": "bdev", 00:27:05.085 "config": [ 00:27:05.085 { 00:27:05.085 "params": { 00:27:05.085 "trtype": "pcie", 00:27:05.085 "traddr": "0000:00:06.0", 00:27:05.085 "name": "Nvme0" 00:27:05.085 }, 00:27:05.085 "method": "bdev_nvme_attach_controller" 00:27:05.085 }, 00:27:05.085 { 00:27:05.085 "method": "bdev_wait_for_examine" 00:27:05.085 } 00:27:05.085 ] 00:27:05.085 } 00:27:05.085 ] 00:27:05.085 } 00:27:05.085 [2024-10-07 05:46:08.864444] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:05.085 [2024-10-07 05:46:08.864681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177373 ] 00:27:05.085 [2024-10-07 05:46:09.034815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.344 [2024-10-07 05:46:09.228991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.978  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:06.978 00:27:06.978 05:46:10 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:06.978 05:46:10 -- dd/basic_rw.sh@23 -- # count=7 00:27:06.978 05:46:10 -- dd/basic_rw.sh@24 -- # count=7 00:27:06.978 05:46:10 -- dd/basic_rw.sh@25 -- # size=57344 00:27:06.978 05:46:10 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:06.978 05:46:10 -- dd/common.sh@98 -- # xtrace_disable 00:27:06.978 05:46:10 -- common/autotest_common.sh@10 -- # set +x 00:27:07.238 05:46:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:07.238 05:46:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:07.238 05:46:11 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.238 05:46:11 -- common/autotest_common.sh@10 -- # set +x 00:27:07.238 [2024-10-07 05:46:11.100517] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:07.238 [2024-10-07 05:46:11.101035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177404 ] 00:27:07.238 { 00:27:07.238 "subsystems": [ 00:27:07.238 { 00:27:07.238 "subsystem": "bdev", 00:27:07.238 "config": [ 00:27:07.238 { 00:27:07.238 "params": { 00:27:07.238 "trtype": "pcie", 00:27:07.238 "traddr": "0000:00:06.0", 00:27:07.238 "name": "Nvme0" 00:27:07.238 }, 00:27:07.238 "method": "bdev_nvme_attach_controller" 00:27:07.238 }, 00:27:07.238 { 00:27:07.238 "method": "bdev_wait_for_examine" 00:27:07.238 } 00:27:07.238 ] 00:27:07.238 } 00:27:07.238 ] 00:27:07.238 } 00:27:07.498 [2024-10-07 05:46:11.254720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.498 [2024-10-07 05:46:11.445558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.001  Copying: 56/56 [kB] (average 54 MBps) 00:27:09.001 00:27:09.001 05:46:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:09.001 05:46:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:09.001 05:46:12 -- dd/common.sh@31 -- # xtrace_disable 00:27:09.001 05:46:12 -- common/autotest_common.sh@10 -- # set +x 00:27:09.001 { 00:27:09.001 "subsystems": [ 00:27:09.001 { 00:27:09.001 "subsystem": "bdev", 00:27:09.001 "config": [ 00:27:09.001 { 00:27:09.001 "params": { 00:27:09.001 "trtype": "pcie", 00:27:09.001 "traddr": "0000:00:06.0", 00:27:09.001 "name": "Nvme0" 00:27:09.001 }, 00:27:09.001 "method": "bdev_nvme_attach_controller" 00:27:09.001 }, 00:27:09.001 { 00:27:09.001 "method": "bdev_wait_for_examine" 00:27:09.001 } 00:27:09.001 ] 00:27:09.001 } 00:27:09.001 ] 00:27:09.001 } 00:27:09.001 [2024-10-07 05:46:12.901146] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:09.001 [2024-10-07 05:46:12.901328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177434 ] 00:27:09.260 [2024-10-07 05:46:13.065830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.518 [2024-10-07 05:46:13.255032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.711  Copying: 56/56 [kB] (average 54 MBps) 00:27:10.711 00:27:10.711 05:46:14 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:10.711 05:46:14 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:10.711 05:46:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:10.711 05:46:14 -- dd/common.sh@11 -- # local nvme_ref= 00:27:10.711 05:46:14 -- dd/common.sh@12 -- # local size=57344 00:27:10.711 05:46:14 -- dd/common.sh@14 -- # local bs=1048576 00:27:10.711 05:46:14 -- dd/common.sh@15 -- # local count=1 00:27:10.711 05:46:14 -- dd/common.sh@18 -- # gen_conf 00:27:10.711 05:46:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:10.711 05:46:14 -- dd/common.sh@31 -- # xtrace_disable 00:27:10.711 05:46:14 -- common/autotest_common.sh@10 -- # set +x 00:27:10.711 { 00:27:10.711 "subsystems": [ 00:27:10.711 { 00:27:10.711 "subsystem": "bdev", 00:27:10.711 "config": [ 00:27:10.711 { 00:27:10.711 "params": { 00:27:10.711 "trtype": "pcie", 00:27:10.711 "traddr": "0000:00:06.0", 00:27:10.711 "name": "Nvme0" 00:27:10.711 }, 00:27:10.711 "method": "bdev_nvme_attach_controller" 00:27:10.711 }, 00:27:10.711 { 00:27:10.711 "method": "bdev_wait_for_examine" 00:27:10.711 } 00:27:10.711 ] 00:27:10.711 } 00:27:10.711 ] 00:27:10.711 } 00:27:10.711 [2024-10-07 05:46:14.628880] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:10.711 [2024-10-07 05:46:14.629089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177466 ] 00:27:10.970 [2024-10-07 05:46:14.797817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.229 [2024-10-07 05:46:14.978455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.423  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:12.423 00:27:12.423 05:46:16 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:12.423 05:46:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:12.423 05:46:16 -- dd/basic_rw.sh@23 -- # count=3 00:27:12.423 05:46:16 -- dd/basic_rw.sh@24 -- # count=3 00:27:12.423 05:46:16 -- dd/basic_rw.sh@25 -- # size=49152 00:27:12.423 05:46:16 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:12.423 05:46:16 -- dd/common.sh@98 -- # xtrace_disable 00:27:12.423 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:27:12.991 05:46:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:12.991 05:46:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:12.991 05:46:16 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.991 05:46:16 -- common/autotest_common.sh@10 -- # set +x 00:27:12.991 [2024-10-07 05:46:16.841141] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:12.991 [2024-10-07 05:46:16.841320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177494 ] 00:27:12.991 { 00:27:12.991 "subsystems": [ 00:27:12.991 { 00:27:12.991 "subsystem": "bdev", 00:27:12.991 "config": [ 00:27:12.991 { 00:27:12.991 "params": { 00:27:12.991 "trtype": "pcie", 00:27:12.991 "traddr": "0000:00:06.0", 00:27:12.991 "name": "Nvme0" 00:27:12.991 }, 00:27:12.991 "method": "bdev_nvme_attach_controller" 00:27:12.991 }, 00:27:12.991 { 00:27:12.991 "method": "bdev_wait_for_examine" 00:27:12.991 } 00:27:12.991 ] 00:27:12.991 } 00:27:12.991 ] 00:27:12.991 } 00:27:13.250 [2024-10-07 05:46:16.991668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.250 [2024-10-07 05:46:17.175364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.751  Copying: 48/48 [kB] (average 46 MBps) 00:27:14.751 00:27:14.751 05:46:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:14.751 05:46:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:14.751 05:46:18 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.751 05:46:18 -- common/autotest_common.sh@10 -- # set +x 00:27:14.751 { 00:27:14.751 "subsystems": [ 00:27:14.751 { 00:27:14.751 "subsystem": "bdev", 00:27:14.751 "config": [ 00:27:14.751 { 00:27:14.751 "params": { 00:27:14.751 "trtype": "pcie", 00:27:14.751 "traddr": "0000:00:06.0", 00:27:14.751 "name": "Nvme0" 00:27:14.751 }, 00:27:14.751 "method": "bdev_nvme_attach_controller" 00:27:14.751 }, 00:27:14.751 { 00:27:14.751 "method": "bdev_wait_for_examine" 00:27:14.751 } 00:27:14.751 ] 00:27:14.751 } 00:27:14.751 ] 00:27:14.751 } 00:27:14.751 [2024-10-07 05:46:18.537898] Starting SPDK v24.01.1-pre git 
sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:14.751 [2024-10-07 05:46:18.538104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177526 ] 00:27:14.751 [2024-10-07 05:46:18.707281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.010 [2024-10-07 05:46:18.902371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.512  Copying: 48/48 [kB] (average 46 MBps) 00:27:16.512 00:27:16.512 05:46:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:16.512 05:46:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:16.512 05:46:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:16.512 05:46:20 -- dd/common.sh@11 -- # local nvme_ref= 00:27:16.512 05:46:20 -- dd/common.sh@12 -- # local size=49152 00:27:16.512 05:46:20 -- dd/common.sh@14 -- # local bs=1048576 00:27:16.512 05:46:20 -- dd/common.sh@15 -- # local count=1 00:27:16.512 05:46:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:16.512 05:46:20 -- dd/common.sh@18 -- # gen_conf 00:27:16.512 05:46:20 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.512 05:46:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.512 [2024-10-07 05:46:20.339634] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:16.512 [2024-10-07 05:46:20.339801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177553 ] 00:27:16.512 { 00:27:16.512 "subsystems": [ 00:27:16.512 { 00:27:16.512 "subsystem": "bdev", 00:27:16.512 "config": [ 00:27:16.512 { 00:27:16.512 "params": { 00:27:16.512 "trtype": "pcie", 00:27:16.512 "traddr": "0000:00:06.0", 00:27:16.512 "name": "Nvme0" 00:27:16.512 }, 00:27:16.512 "method": "bdev_nvme_attach_controller" 00:27:16.512 }, 00:27:16.512 { 00:27:16.512 "method": "bdev_wait_for_examine" 00:27:16.512 } 00:27:16.512 ] 00:27:16.512 } 00:27:16.512 ] 00:27:16.512 } 00:27:16.771 [2024-10-07 05:46:20.492312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.771 [2024-10-07 05:46:20.679101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.274  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:18.274 00:27:18.274 05:46:22 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:18.274 05:46:22 -- dd/basic_rw.sh@23 -- # count=3 00:27:18.274 05:46:22 -- dd/basic_rw.sh@24 -- # count=3 00:27:18.274 05:46:22 -- dd/basic_rw.sh@25 -- # size=49152 00:27:18.274 05:46:22 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:18.274 05:46:22 -- dd/common.sh@98 -- # xtrace_disable 00:27:18.274 05:46:22 -- common/autotest_common.sh@10 -- # set +x 00:27:18.533 05:46:22 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:18.533 05:46:22 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:18.533 05:46:22 -- dd/common.sh@31 -- # xtrace_disable 00:27:18.533 05:46:22 -- common/autotest_common.sh@10 -- # set +x 00:27:18.792 { 00:27:18.792 "subsystems": [ 00:27:18.792 { 00:27:18.792 
"subsystem": "bdev", 00:27:18.792 "config": [ 00:27:18.792 { 00:27:18.792 "params": { 00:27:18.792 "trtype": "pcie", 00:27:18.792 "traddr": "0000:00:06.0", 00:27:18.792 "name": "Nvme0" 00:27:18.792 }, 00:27:18.792 "method": "bdev_nvme_attach_controller" 00:27:18.792 }, 00:27:18.792 { 00:27:18.792 "method": "bdev_wait_for_examine" 00:27:18.792 } 00:27:18.792 ] 00:27:18.792 } 00:27:18.792 ] 00:27:18.792 } 00:27:18.792 [2024-10-07 05:46:22.552119] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:18.792 [2024-10-07 05:46:22.552324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177588 ] 00:27:18.792 [2024-10-07 05:46:22.719351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.051 [2024-10-07 05:46:22.914499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.688  Copying: 48/48 [kB] (average 46 MBps) 00:27:20.688 00:27:20.688 05:46:24 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:20.688 05:46:24 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:20.688 05:46:24 -- dd/common.sh@31 -- # xtrace_disable 00:27:20.688 05:46:24 -- common/autotest_common.sh@10 -- # set +x 00:27:20.688 { 00:27:20.688 "subsystems": [ 00:27:20.688 { 00:27:20.688 "subsystem": "bdev", 00:27:20.688 "config": [ 00:27:20.688 { 00:27:20.688 "params": { 00:27:20.688 "trtype": "pcie", 00:27:20.688 "traddr": "0000:00:06.0", 00:27:20.688 "name": "Nvme0" 00:27:20.688 }, 00:27:20.688 "method": "bdev_nvme_attach_controller" 00:27:20.688 }, 00:27:20.688 { 00:27:20.688 "method": "bdev_wait_for_examine" 00:27:20.688 } 00:27:20.688 ] 00:27:20.688 } 00:27:20.688 ] 00:27:20.688 } 00:27:20.688 [2024-10-07 05:46:24.380854] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:20.688 [2024-10-07 05:46:24.381052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177612 ] 00:27:20.688 [2024-10-07 05:46:24.550799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.947 [2024-10-07 05:46:24.735159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.143  Copying: 48/48 [kB] (average 46 MBps) 00:27:22.143 00:27:22.402 05:46:26 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:22.402 05:46:26 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:22.402 05:46:26 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:22.402 05:46:26 -- dd/common.sh@11 -- # local nvme_ref= 00:27:22.402 05:46:26 -- dd/common.sh@12 -- # local size=49152 00:27:22.402 05:46:26 -- dd/common.sh@14 -- # local bs=1048576 00:27:22.402 05:46:26 -- dd/common.sh@15 -- # local count=1 00:27:22.402 05:46:26 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:22.402 05:46:26 -- dd/common.sh@18 -- # gen_conf 00:27:22.402 05:46:26 -- dd/common.sh@31 -- # xtrace_disable 00:27:22.402 05:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:22.402 { 00:27:22.402 "subsystems": [ 00:27:22.402 { 00:27:22.402 "subsystem": "bdev", 00:27:22.402 "config": [ 00:27:22.402 { 00:27:22.402 "params": { 00:27:22.402 "trtype": "pcie", 00:27:22.402 "traddr": "0000:00:06.0", 00:27:22.402 "name": "Nvme0" 00:27:22.402 }, 00:27:22.402 "method": "bdev_nvme_attach_controller" 00:27:22.402 }, 00:27:22.402 { 00:27:22.402 "method": "bdev_wait_for_examine" 00:27:22.402 } 00:27:22.402 ] 00:27:22.402 } 00:27:22.402 ] 00:27:22.402 } 00:27:22.402 [2024-10-07 05:46:26.205323] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:22.402 [2024-10-07 05:46:26.205524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177641 ] 00:27:22.402 [2024-10-07 05:46:26.375224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.662 [2024-10-07 05:46:26.575680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.166  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:24.166 00:27:24.166 ************************************ 00:27:24.166 END TEST dd_rw 00:27:24.166 ************************************ 00:27:24.166 00:27:24.166 real 0m34.891s 00:27:24.166 user 0m28.438s 00:27:24.166 sys 0m5.144s 00:27:24.166 05:46:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.166 05:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:24.166 05:46:28 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:24.166 05:46:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:24.166 05:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.166 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:24.166 ************************************ 00:27:24.166 START TEST dd_rw_offset 00:27:24.166 ************************************ 00:27:24.166 05:46:28 -- common/autotest_common.sh@1104 -- # basic_offset 00:27:24.166 05:46:28 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:27:24.166 05:46:28 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:24.166 05:46:28 -- dd/common.sh@98 -- # xtrace_disable 00:27:24.166 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:24.166 05:46:28 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:24.166 05:46:28 -- dd/basic_rw.sh@56 -- # 
data=egzohsg8si9wvlhqzii6khdr8ty9wuy8fwphk7eb26oy5rx2mdymwswopblwuk9pecurp5hpz6ojl1ziaa5kvk5xyb7dwwepd4te13r22k07cfas1t8dfxqnd38go5e2w1ojp28u53no9zawebblhbdpw3kng8zysj0qatleve5ex8xp0ppj59gdlnp9kbon0602zhkq6rhokte0cewqoxwtxi861u92oqqsvoubpuovf20vvy6yzpgwnepcflpnqkqgy14t14cbcn1oxqqy2we2mlrxqs927k4io32cchcm43cpdcopdsxxpt9o6h3q9woar9y3x7w4x7dlthnrwzh629ov3xz2nueeb06t3n00w426jkb5tw9rlwdb6pxqa4p8g2s9qakpctlrvlxupo9ao2r9tqhkscrwya205d0vmg8xzwre7ev53v15c41saaq4zi6qfm09jnbhftw42e7ly4yh6m7siv7r3z342eh8iudw5c0tcl7563ivp6g0m3sqw3mdcdx8j8tje9egs31s7twfq2htuj845kjepz3y7ppoqhvamea4lp7186zoxnzoubf5cdtzoy4ehjgxjsnw6pm9tzjc1z3smw42k6ab5t211ojwybft5fowbbw9ptsgqjz0ybh8z991u7ftr8wltknw574tdhw1b7r6iq8z0sl7cnvibq8mq5yw9s3ro2bzm24qfdwkzx6um3pson4sjpelf5s2cepbft4x8g4mst0hgsbfuskeojajrs0vq20jy96mwirn4z0jq655k07bklgvqgfccfooaqxrd80vwzitfx5vf14od90wmec4ngrgs8hc4e3drofnhae1dkp9y8cnncq7pqk085f7uu10mfhejpbfz416kis50o9nrcn7wsxbzjg0aerpx5kxi4fvnhudw8pb763nwfsajp8dcgtznapplyhexpwc871zezpv4i3yssledf5cgm0d527kqm75dzrfsfeyvocqlrxgt4nc2v7xokh86qdasfqw36m9g5hu71gjzdxb9i0ympk9ql27tal6s2vdq08o2axkf54ojaiz9arekdylxnn2ku8l6eyw5h7gyrl7doz5pr3kufnkg1v3eqkdrjpbmealil1w28s4q8z6ib8atltj4ofzjncrckym7rsx3xsp1f9twaeiuujg2zroydb1ieqo8c4jbv5i9uppljckcam7kzoxt36jsa9mw1hr2om15l65pake9ekzqa5ymfxzd8asl5b20vs7y72zwxes3f8ie6pxnx01hdkf33new0bdzxk91pgm2d18mcr8n4bfw7s3rf37fazylxm0876wsf4rgf8j72of2s4byechh955g7wim7jzrk9d2gf6xsdpecard6pj230ri24p36wqfms4rbcj14icdhynpkf67oibwfkz9dh8cbu8gmcx4ngt80sewrzoyp93i3yyzyo948pt2f8x8qcy51ntzobcseex73biani2ss2463efjbm2pxzadvxl5nvol87gjvxuzy931hjwp4buo3at5r2mdhfnx1d4qo0lw7lyfgy0brufrda5p0v43hxemqgu82wike8ap7tt7de88jmio254aa8p972shrgy6l91456020ry8gsgpv6fs3p1pbedbbq949ah8nv7efcb29xifjq3anjtp85qfs4p2lrzohepzd74fy6d7vcvuudxm0b8h9ck309qzk215mubfamch0kd77u58620pybhm10kyibdu3w3jzw1tjmjf40gkwek73vpvl1wc8p2rhic9pityubn1ehv5fbqz4icv9j0w4zysayby1vk7xko0uu2dmiuunj3nxhzgs5l0fs06dn66p15gk2bx5irzppo2n5tjr1h2c4hw0gmyw7ii7hcflzw0orkt7xquur2mx9j4zo4brtb9f27qmgft8l2ektzlp4w5sx7zwclzlt09vudkclo0ulykuwqf23dbh0ylffrerkaaa3k6jks84zi6je2gpjh3ds3ikpx90gcgir05ibsrjkslxv1nyrr9ahey2yhsahbl0nyzsq34p2bzij6layyb0888mzz05uo9pglfsl1ep1dxb899fsxte5a3scz1i47hp5bczc45qiu96kp2yt4jhvbyydtwwbz77zfbcj1xkg27tyrrzfb5b7f6t1i6puu1amhbg0txttp853ijay4ceaijtckwtoumxina26b363iv3rwsf8c9v55drfe3v4g0ztq6chq0bfaf58a93z6gjts1dcnyfkxnc24bwlhhnhyfscfs5vte1x5qsngjbiqt01j5kh9y890ggf4017tubmos7ve3xkw40sh683cfzh9ess1ua00d00hchuz2g7kv7al5j8zs7wol6ugp0q4n28det7zjt7ya1nikyt9a1oskmtucf29fj1eouuv26d5b40m4cgpaozko8ix2lt9sl0ow277j5xn710xex3s0orjgwu7c91sxynb1pma9dndxt26lvlblk0jvrxmz1bhmkgmlvu1ul2mack7get8mds873ixtapzlb3u2qj3pu8ggqscr9efnzlrut0v3jni8rh2vkux8mi80ldef9wuebqvjezmzl2ovkuscl7x2ltv6rc00a3ovj9byjpk5isdndfqj6nlyjttdafqxstnw9twlljjnbnoszjh4xkgkgg3w3m6etd14m7wcozdr46s8ptm2tdntmwes73j4okmrorisbn6aobbs6q274seutxl2ffdsl6yxkt4rbc3r1c7oye2ch351imib6ccdt4epkb6h5kv5w92id5a09ppne85hj1xgp5cvcvg5lplfyx41eyw8x3cux4cv36i7c00drewlakrtuub9137lwet9fatrvpt573i6ckiisaehwaqoertbjtb8ld6fav8u1yta7bvu0ma4g97cb4di3p0jug4pvw8fhb40194z1155ca1ivwwc0i7vec2snpvhq4f9clfl6buwa81zwjdfj5upj6rnfaqrbhoxh2j9a2mgxmfzsfhab33la9orsfwr1q79m3t6ctsh81aqviaflsyxarzazyib127h9qdn29brmufqc72quzeps90t1m7lpdrozku42dq999k7fvc34pk6ij6t0pp91wfizvn7wcutw1oighfezlq5dnz8mmb091zmyamvvnrucm7ct977egy9jqokk5umxx6bkmz7p0f6sj7datrnrc5hvmhq5k8saavoz4ajttoejwrsxex42wqj2i8u0wgqcgis80gzqgtye7jivrsl3icuioo32juyjboumqbrqnhz15680j8kirqfveji3yefrn8hv2mniy4599b84hnvme0tll5znwii890xbbkv831f1d08gf354xu8cnaejnu7toth50tau7pnz5zb9ab15fnwqqfcvskrvageyxbr81ujofhkhsriy7epehjl5v3pfdas6eeavj25wu3s7vor6d4dl84zkybhxffs52ef7bcclu4eo8lb6xpu0xk4zvwbh1brx1bi6
onqo2pgf993rfq7m3ijj7y18nznkh5mb3eeytxojrup7m026u9iz9h32v0fxf0haxz8k2zwjv3no8jomzoaa71gtkicu1zu1z4kvmt8nk9s42jte1fcepzfnpyv0v1zm68hxy3jqlxgjfeyg51x8uz2bz9iberla77chxfrdkxbzmqbcvtkge81q7xm2k8huyezz1ro6hurw2rdn59ovajmjcepn9pl2sxa2mnqiv88ykcn77jnxq6ncxfo8rd5cq3x1vtfcn7zk6u0zity2j6cjohsv8gkizs3hcj0gxv2tau5v04eamc3qcldijm7lmrm1pjmoo6e79j62n5drspvr6wxphbna7t7kudh9dyzw8ie2wsw0whzy0mjmq37ev9uv094cjd8jsydu3a4lr318ikb9ckltawu7wj15e10wbglpafue2iopqfvzakg33al8a26bns8zy5f5nyadllirpnoex280pw1rtrv64gphnr6jpbkh6ew9iy6n4818jt0ha5yu2vmjvo0mnqyq4uzdqum593jtpu 00:27:24.166 05:46:28 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:24.166 05:46:28 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:24.166 05:46:28 -- dd/common.sh@31 -- # xtrace_disable 00:27:24.166 05:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:24.166 [2024-10-07 05:46:28.128186] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:24.166 [2024-10-07 05:46:28.128326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177694 ] 00:27:24.166 { 00:27:24.166 "subsystems": [ 00:27:24.166 { 00:27:24.166 "subsystem": "bdev", 00:27:24.166 "config": [ 00:27:24.166 { 00:27:24.166 "params": { 00:27:24.166 "trtype": "pcie", 00:27:24.166 "traddr": "0000:00:06.0", 00:27:24.166 "name": "Nvme0" 00:27:24.166 }, 00:27:24.166 "method": "bdev_nvme_attach_controller" 00:27:24.166 }, 00:27:24.166 { 00:27:24.166 "method": "bdev_wait_for_examine" 00:27:24.166 } 00:27:24.166 ] 00:27:24.166 } 00:27:24.166 ] 00:27:24.166 } 00:27:24.425 [2024-10-07 05:46:28.283067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.684 [2024-10-07 05:46:28.470626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.319  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:26.319 00:27:26.319 05:46:29 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:26.319 05:46:29 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:26.319 05:46:29 -- dd/common.sh@31 -- # xtrace_disable 00:27:26.319 05:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:26.319 { 00:27:26.319 "subsystems": [ 00:27:26.319 { 00:27:26.319 "subsystem": "bdev", 00:27:26.319 "config": [ 00:27:26.319 { 00:27:26.319 "params": { 00:27:26.319 "trtype": "pcie", 00:27:26.319 "traddr": "0000:00:06.0", 00:27:26.319 "name": "Nvme0" 00:27:26.319 }, 00:27:26.319 "method": "bdev_nvme_attach_controller" 00:27:26.319 }, 00:27:26.319 { 00:27:26.319 "method": "bdev_wait_for_examine" 00:27:26.319 } 00:27:26.319 ] 00:27:26.319 } 00:27:26.319 ] 00:27:26.319 } 00:27:26.319 [2024-10-07 05:46:29.943114] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
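[editor's note] dd_rw_offset (basic_offset) checks that --seek and --skip address the same block: the 4096-byte random payload generated above is written one block into the device, read back from the same offset, and compared byte for byte by the [[ ... == ... ]] test that follows. The sketch below is a condensed version of that round trip; gen_bytes usage, file names, and the process substitution for the JSON config are simplifications of what the harness actually does.

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  data=$(gen_bytes 4096)                                                          # random 4 KiB payload
  printf '%s' "$data" > dd.dump0
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)               # write at LBA offset 1
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)     # read the same block back
  read -rn4096 data_check < dd.dump1
  [[ $data == "$data_check" ]]                                                    # payload must survive the trip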
00:27:26.319 [2024-10-07 05:46:29.943314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177724 ] 00:27:26.319 [2024-10-07 05:46:30.110938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.579 [2024-10-07 05:46:30.299787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.809  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:27.809 00:27:27.809 05:46:31 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:27.810 05:46:31 -- dd/basic_rw.sh@72 -- # [[ egzohsg8si9wvlhqzii6khdr8ty9wuy8fwphk7eb26oy5rx2mdymwswopblwuk9pecurp5hpz6ojl1ziaa5kvk5xyb7dwwepd4te13r22k07cfas1t8dfxqnd38go5e2w1ojp28u53no9zawebblhbdpw3kng8zysj0qatleve5ex8xp0ppj59gdlnp9kbon0602zhkq6rhokte0cewqoxwtxi861u92oqqsvoubpuovf20vvy6yzpgwnepcflpnqkqgy14t14cbcn1oxqqy2we2mlrxqs927k4io32cchcm43cpdcopdsxxpt9o6h3q9woar9y3x7w4x7dlthnrwzh629ov3xz2nueeb06t3n00w426jkb5tw9rlwdb6pxqa4p8g2s9qakpctlrvlxupo9ao2r9tqhkscrwya205d0vmg8xzwre7ev53v15c41saaq4zi6qfm09jnbhftw42e7ly4yh6m7siv7r3z342eh8iudw5c0tcl7563ivp6g0m3sqw3mdcdx8j8tje9egs31s7twfq2htuj845kjepz3y7ppoqhvamea4lp7186zoxnzoubf5cdtzoy4ehjgxjsnw6pm9tzjc1z3smw42k6ab5t211ojwybft5fowbbw9ptsgqjz0ybh8z991u7ftr8wltknw574tdhw1b7r6iq8z0sl7cnvibq8mq5yw9s3ro2bzm24qfdwkzx6um3pson4sjpelf5s2cepbft4x8g4mst0hgsbfuskeojajrs0vq20jy96mwirn4z0jq655k07bklgvqgfccfooaqxrd80vwzitfx5vf14od90wmec4ngrgs8hc4e3drofnhae1dkp9y8cnncq7pqk085f7uu10mfhejpbfz416kis50o9nrcn7wsxbzjg0aerpx5kxi4fvnhudw8pb763nwfsajp8dcgtznapplyhexpwc871zezpv4i3yssledf5cgm0d527kqm75dzrfsfeyvocqlrxgt4nc2v7xokh86qdasfqw36m9g5hu71gjzdxb9i0ympk9ql27tal6s2vdq08o2axkf54ojaiz9arekdylxnn2ku8l6eyw5h7gyrl7doz5pr3kufnkg1v3eqkdrjpbmealil1w28s4q8z6ib8atltj4ofzjncrckym7rsx3xsp1f9twaeiuujg2zroydb1ieqo8c4jbv5i9uppljckcam7kzoxt36jsa9mw1hr2om15l65pake9ekzqa5ymfxzd8asl5b20vs7y72zwxes3f8ie6pxnx01hdkf33new0bdzxk91pgm2d18mcr8n4bfw7s3rf37fazylxm0876wsf4rgf8j72of2s4byechh955g7wim7jzrk9d2gf6xsdpecard6pj230ri24p36wqfms4rbcj14icdhynpkf67oibwfkz9dh8cbu8gmcx4ngt80sewrzoyp93i3yyzyo948pt2f8x8qcy51ntzobcseex73biani2ss2463efjbm2pxzadvxl5nvol87gjvxuzy931hjwp4buo3at5r2mdhfnx1d4qo0lw7lyfgy0brufrda5p0v43hxemqgu82wike8ap7tt7de88jmio254aa8p972shrgy6l91456020ry8gsgpv6fs3p1pbedbbq949ah8nv7efcb29xifjq3anjtp85qfs4p2lrzohepzd74fy6d7vcvuudxm0b8h9ck309qzk215mubfamch0kd77u58620pybhm10kyibdu3w3jzw1tjmjf40gkwek73vpvl1wc8p2rhic9pityubn1ehv5fbqz4icv9j0w4zysayby1vk7xko0uu2dmiuunj3nxhzgs5l0fs06dn66p15gk2bx5irzppo2n5tjr1h2c4hw0gmyw7ii7hcflzw0orkt7xquur2mx9j4zo4brtb9f27qmgft8l2ektzlp4w5sx7zwclzlt09vudkclo0ulykuwqf23dbh0ylffrerkaaa3k6jks84zi6je2gpjh3ds3ikpx90gcgir05ibsrjkslxv1nyrr9ahey2yhsahbl0nyzsq34p2bzij6layyb0888mzz05uo9pglfsl1ep1dxb899fsxte5a3scz1i47hp5bczc45qiu96kp2yt4jhvbyydtwwbz77zfbcj1xkg27tyrrzfb5b7f6t1i6puu1amhbg0txttp853ijay4ceaijtckwtoumxina26b363iv3rwsf8c9v55drfe3v4g0ztq6chq0bfaf58a93z6gjts1dcnyfkxnc24bwlhhnhyfscfs5vte1x5qsngjbiqt01j5kh9y890ggf4017tubmos7ve3xkw40sh683cfzh9ess1ua00d00hchuz2g7kv7al5j8zs7wol6ugp0q4n28det7zjt7ya1nikyt9a1oskmtucf29fj1eouuv26d5b40m4cgpaozko8ix2lt9sl0ow277j5xn710xex3s0orjgwu7c91sxynb1pma9dndxt26lvlblk0jvrxmz1bhmkgmlvu1ul2mack7get8mds873ixtapzlb3u2qj3pu8ggqscr9efnzlrut0v3jni8rh2vkux8mi80ldef9wuebqvjezmzl2ovkuscl7x2ltv6rc00a3ovj9byjpk5isdndfqj6nlyjttdafqxstnw9twlljjnbnoszjh4xkgkgg3w3m6etd14m7wcozdr46s8ptm2tdntmwes73j4okmrorisbn6aobbs6q274seutxl2ffdsl6yxkt4rbc3r1c7oye2ch351imib6ccdt4epkb6h5kv5w92id5a09ppne85hj1xgp5cvcvg5lplfyx41eyw8x3cux4c
v36i7c00drewlakrtuub9137lwet9fatrvpt573i6ckiisaehwaqoertbjtb8ld6fav8u1yta7bvu0ma4g97cb4di3p0jug4pvw8fhb40194z1155ca1ivwwc0i7vec2snpvhq4f9clfl6buwa81zwjdfj5upj6rnfaqrbhoxh2j9a2mgxmfzsfhab33la9orsfwr1q79m3t6ctsh81aqviaflsyxarzazyib127h9qdn29brmufqc72quzeps90t1m7lpdrozku42dq999k7fvc34pk6ij6t0pp91wfizvn7wcutw1oighfezlq5dnz8mmb091zmyamvvnrucm7ct977egy9jqokk5umxx6bkmz7p0f6sj7datrnrc5hvmhq5k8saavoz4ajttoejwrsxex42wqj2i8u0wgqcgis80gzqgtye7jivrsl3icuioo32juyjboumqbrqnhz15680j8kirqfveji3yefrn8hv2mniy4599b84hnvme0tll5znwii890xbbkv831f1d08gf354xu8cnaejnu7toth50tau7pnz5zb9ab15fnwqqfcvskrvageyxbr81ujofhkhsriy7epehjl5v3pfdas6eeavj25wu3s7vor6d4dl84zkybhxffs52ef7bcclu4eo8lb6xpu0xk4zvwbh1brx1bi6onqo2pgf993rfq7m3ijj7y18nznkh5mb3eeytxojrup7m026u9iz9h32v0fxf0haxz8k2zwjv3no8jomzoaa71gtkicu1zu1z4kvmt8nk9s42jte1fcepzfnpyv0v1zm68hxy3jqlxgjfeyg51x8uz2bz9iberla77chxfrdkxbzmqbcvtkge81q7xm2k8huyezz1ro6hurw2rdn59ovajmjcepn9pl2sxa2mnqiv88ykcn77jnxq6ncxfo8rd5cq3x1vtfcn7zk6u0zity2j6cjohsv8gkizs3hcj0gxv2tau5v04eamc3qcldijm7lmrm1pjmoo6e79j62n5drspvr6wxphbna7t7kudh9dyzw8ie2wsw0whzy0mjmq37ev9uv094cjd8jsydu3a4lr318ikb9ckltawu7wj15e10wbglpafue2iopqfvzakg33al8a26bns8zy5f5nyadllirpnoex280pw1rtrv64gphnr6jpbkh6ew9iy6n4818jt0ha5yu2vmjvo0mnqyq4uzdqum593jtpu == \e\g\z\o\h\s\g\8\s\i\9\w\v\l\h\q\z\i\i\6\k\h\d\r\8\t\y\9\w\u\y\8\f\w\p\h\k\7\e\b\2\6\o\y\5\r\x\2\m\d\y\m\w\s\w\o\p\b\l\w\u\k\9\p\e\c\u\r\p\5\h\p\z\6\o\j\l\1\z\i\a\a\5\k\v\k\5\x\y\b\7\d\w\w\e\p\d\4\t\e\1\3\r\2\2\k\0\7\c\f\a\s\1\t\8\d\f\x\q\n\d\3\8\g\o\5\e\2\w\1\o\j\p\2\8\u\5\3\n\o\9\z\a\w\e\b\b\l\h\b\d\p\w\3\k\n\g\8\z\y\s\j\0\q\a\t\l\e\v\e\5\e\x\8\x\p\0\p\p\j\5\9\g\d\l\n\p\9\k\b\o\n\0\6\0\2\z\h\k\q\6\r\h\o\k\t\e\0\c\e\w\q\o\x\w\t\x\i\8\6\1\u\9\2\o\q\q\s\v\o\u\b\p\u\o\v\f\2\0\v\v\y\6\y\z\p\g\w\n\e\p\c\f\l\p\n\q\k\q\g\y\1\4\t\1\4\c\b\c\n\1\o\x\q\q\y\2\w\e\2\m\l\r\x\q\s\9\2\7\k\4\i\o\3\2\c\c\h\c\m\4\3\c\p\d\c\o\p\d\s\x\x\p\t\9\o\6\h\3\q\9\w\o\a\r\9\y\3\x\7\w\4\x\7\d\l\t\h\n\r\w\z\h\6\2\9\o\v\3\x\z\2\n\u\e\e\b\0\6\t\3\n\0\0\w\4\2\6\j\k\b\5\t\w\9\r\l\w\d\b\6\p\x\q\a\4\p\8\g\2\s\9\q\a\k\p\c\t\l\r\v\l\x\u\p\o\9\a\o\2\r\9\t\q\h\k\s\c\r\w\y\a\2\0\5\d\0\v\m\g\8\x\z\w\r\e\7\e\v\5\3\v\1\5\c\4\1\s\a\a\q\4\z\i\6\q\f\m\0\9\j\n\b\h\f\t\w\4\2\e\7\l\y\4\y\h\6\m\7\s\i\v\7\r\3\z\3\4\2\e\h\8\i\u\d\w\5\c\0\t\c\l\7\5\6\3\i\v\p\6\g\0\m\3\s\q\w\3\m\d\c\d\x\8\j\8\t\j\e\9\e\g\s\3\1\s\7\t\w\f\q\2\h\t\u\j\8\4\5\k\j\e\p\z\3\y\7\p\p\o\q\h\v\a\m\e\a\4\l\p\7\1\8\6\z\o\x\n\z\o\u\b\f\5\c\d\t\z\o\y\4\e\h\j\g\x\j\s\n\w\6\p\m\9\t\z\j\c\1\z\3\s\m\w\4\2\k\6\a\b\5\t\2\1\1\o\j\w\y\b\f\t\5\f\o\w\b\b\w\9\p\t\s\g\q\j\z\0\y\b\h\8\z\9\9\1\u\7\f\t\r\8\w\l\t\k\n\w\5\7\4\t\d\h\w\1\b\7\r\6\i\q\8\z\0\s\l\7\c\n\v\i\b\q\8\m\q\5\y\w\9\s\3\r\o\2\b\z\m\2\4\q\f\d\w\k\z\x\6\u\m\3\p\s\o\n\4\s\j\p\e\l\f\5\s\2\c\e\p\b\f\t\4\x\8\g\4\m\s\t\0\h\g\s\b\f\u\s\k\e\o\j\a\j\r\s\0\v\q\2\0\j\y\9\6\m\w\i\r\n\4\z\0\j\q\6\5\5\k\0\7\b\k\l\g\v\q\g\f\c\c\f\o\o\a\q\x\r\d\8\0\v\w\z\i\t\f\x\5\v\f\1\4\o\d\9\0\w\m\e\c\4\n\g\r\g\s\8\h\c\4\e\3\d\r\o\f\n\h\a\e\1\d\k\p\9\y\8\c\n\n\c\q\7\p\q\k\0\8\5\f\7\u\u\1\0\m\f\h\e\j\p\b\f\z\4\1\6\k\i\s\5\0\o\9\n\r\c\n\7\w\s\x\b\z\j\g\0\a\e\r\p\x\5\k\x\i\4\f\v\n\h\u\d\w\8\p\b\7\6\3\n\w\f\s\a\j\p\8\d\c\g\t\z\n\a\p\p\l\y\h\e\x\p\w\c\8\7\1\z\e\z\p\v\4\i\3\y\s\s\l\e\d\f\5\c\g\m\0\d\5\2\7\k\q\m\7\5\d\z\r\f\s\f\e\y\v\o\c\q\l\r\x\g\t\4\n\c\2\v\7\x\o\k\h\8\6\q\d\a\s\f\q\w\3\6\m\9\g\5\h\u\7\1\g\j\z\d\x\b\9\i\0\y\m\p\k\9\q\l\2\7\t\a\l\6\s\2\v\d\q\0\8\o\2\a\x\k\f\5\4\o\j\a\i\z\9\a\r\e\k\d\y\l\x\n\n\2\k\u\8\l\6\e\y\w\5\h\7\g\y\r\l\7\d\o\z\5\p\r\3\k\u\f\n\k\g\1\v\3\e\q\k\d\r\j\p\b\m\e\a\l\i\l\1\w\2\8\s\4\q\8\z\6\i\b\8\a\t\l\t\j\4\o\f\z\j\n\c\
r\c\k\y\m\7\r\s\x\3\x\s\p\1\f\9\t\w\a\e\i\u\u\j\g\2\z\r\o\y\d\b\1\i\e\q\o\8\c\4\j\b\v\5\i\9\u\p\p\l\j\c\k\c\a\m\7\k\z\o\x\t\3\6\j\s\a\9\m\w\1\h\r\2\o\m\1\5\l\6\5\p\a\k\e\9\e\k\z\q\a\5\y\m\f\x\z\d\8\a\s\l\5\b\2\0\v\s\7\y\7\2\z\w\x\e\s\3\f\8\i\e\6\p\x\n\x\0\1\h\d\k\f\3\3\n\e\w\0\b\d\z\x\k\9\1\p\g\m\2\d\1\8\m\c\r\8\n\4\b\f\w\7\s\3\r\f\3\7\f\a\z\y\l\x\m\0\8\7\6\w\s\f\4\r\g\f\8\j\7\2\o\f\2\s\4\b\y\e\c\h\h\9\5\5\g\7\w\i\m\7\j\z\r\k\9\d\2\g\f\6\x\s\d\p\e\c\a\r\d\6\p\j\2\3\0\r\i\2\4\p\3\6\w\q\f\m\s\4\r\b\c\j\1\4\i\c\d\h\y\n\p\k\f\6\7\o\i\b\w\f\k\z\9\d\h\8\c\b\u\8\g\m\c\x\4\n\g\t\8\0\s\e\w\r\z\o\y\p\9\3\i\3\y\y\z\y\o\9\4\8\p\t\2\f\8\x\8\q\c\y\5\1\n\t\z\o\b\c\s\e\e\x\7\3\b\i\a\n\i\2\s\s\2\4\6\3\e\f\j\b\m\2\p\x\z\a\d\v\x\l\5\n\v\o\l\8\7\g\j\v\x\u\z\y\9\3\1\h\j\w\p\4\b\u\o\3\a\t\5\r\2\m\d\h\f\n\x\1\d\4\q\o\0\l\w\7\l\y\f\g\y\0\b\r\u\f\r\d\a\5\p\0\v\4\3\h\x\e\m\q\g\u\8\2\w\i\k\e\8\a\p\7\t\t\7\d\e\8\8\j\m\i\o\2\5\4\a\a\8\p\9\7\2\s\h\r\g\y\6\l\9\1\4\5\6\0\2\0\r\y\8\g\s\g\p\v\6\f\s\3\p\1\p\b\e\d\b\b\q\9\4\9\a\h\8\n\v\7\e\f\c\b\2\9\x\i\f\j\q\3\a\n\j\t\p\8\5\q\f\s\4\p\2\l\r\z\o\h\e\p\z\d\7\4\f\y\6\d\7\v\c\v\u\u\d\x\m\0\b\8\h\9\c\k\3\0\9\q\z\k\2\1\5\m\u\b\f\a\m\c\h\0\k\d\7\7\u\5\8\6\2\0\p\y\b\h\m\1\0\k\y\i\b\d\u\3\w\3\j\z\w\1\t\j\m\j\f\4\0\g\k\w\e\k\7\3\v\p\v\l\1\w\c\8\p\2\r\h\i\c\9\p\i\t\y\u\b\n\1\e\h\v\5\f\b\q\z\4\i\c\v\9\j\0\w\4\z\y\s\a\y\b\y\1\v\k\7\x\k\o\0\u\u\2\d\m\i\u\u\n\j\3\n\x\h\z\g\s\5\l\0\f\s\0\6\d\n\6\6\p\1\5\g\k\2\b\x\5\i\r\z\p\p\o\2\n\5\t\j\r\1\h\2\c\4\h\w\0\g\m\y\w\7\i\i\7\h\c\f\l\z\w\0\o\r\k\t\7\x\q\u\u\r\2\m\x\9\j\4\z\o\4\b\r\t\b\9\f\2\7\q\m\g\f\t\8\l\2\e\k\t\z\l\p\4\w\5\s\x\7\z\w\c\l\z\l\t\0\9\v\u\d\k\c\l\o\0\u\l\y\k\u\w\q\f\2\3\d\b\h\0\y\l\f\f\r\e\r\k\a\a\a\3\k\6\j\k\s\8\4\z\i\6\j\e\2\g\p\j\h\3\d\s\3\i\k\p\x\9\0\g\c\g\i\r\0\5\i\b\s\r\j\k\s\l\x\v\1\n\y\r\r\9\a\h\e\y\2\y\h\s\a\h\b\l\0\n\y\z\s\q\3\4\p\2\b\z\i\j\6\l\a\y\y\b\0\8\8\8\m\z\z\0\5\u\o\9\p\g\l\f\s\l\1\e\p\1\d\x\b\8\9\9\f\s\x\t\e\5\a\3\s\c\z\1\i\4\7\h\p\5\b\c\z\c\4\5\q\i\u\9\6\k\p\2\y\t\4\j\h\v\b\y\y\d\t\w\w\b\z\7\7\z\f\b\c\j\1\x\k\g\2\7\t\y\r\r\z\f\b\5\b\7\f\6\t\1\i\6\p\u\u\1\a\m\h\b\g\0\t\x\t\t\p\8\5\3\i\j\a\y\4\c\e\a\i\j\t\c\k\w\t\o\u\m\x\i\n\a\2\6\b\3\6\3\i\v\3\r\w\s\f\8\c\9\v\5\5\d\r\f\e\3\v\4\g\0\z\t\q\6\c\h\q\0\b\f\a\f\5\8\a\9\3\z\6\g\j\t\s\1\d\c\n\y\f\k\x\n\c\2\4\b\w\l\h\h\n\h\y\f\s\c\f\s\5\v\t\e\1\x\5\q\s\n\g\j\b\i\q\t\0\1\j\5\k\h\9\y\8\9\0\g\g\f\4\0\1\7\t\u\b\m\o\s\7\v\e\3\x\k\w\4\0\s\h\6\8\3\c\f\z\h\9\e\s\s\1\u\a\0\0\d\0\0\h\c\h\u\z\2\g\7\k\v\7\a\l\5\j\8\z\s\7\w\o\l\6\u\g\p\0\q\4\n\2\8\d\e\t\7\z\j\t\7\y\a\1\n\i\k\y\t\9\a\1\o\s\k\m\t\u\c\f\2\9\f\j\1\e\o\u\u\v\2\6\d\5\b\4\0\m\4\c\g\p\a\o\z\k\o\8\i\x\2\l\t\9\s\l\0\o\w\2\7\7\j\5\x\n\7\1\0\x\e\x\3\s\0\o\r\j\g\w\u\7\c\9\1\s\x\y\n\b\1\p\m\a\9\d\n\d\x\t\2\6\l\v\l\b\l\k\0\j\v\r\x\m\z\1\b\h\m\k\g\m\l\v\u\1\u\l\2\m\a\c\k\7\g\e\t\8\m\d\s\8\7\3\i\x\t\a\p\z\l\b\3\u\2\q\j\3\p\u\8\g\g\q\s\c\r\9\e\f\n\z\l\r\u\t\0\v\3\j\n\i\8\r\h\2\v\k\u\x\8\m\i\8\0\l\d\e\f\9\w\u\e\b\q\v\j\e\z\m\z\l\2\o\v\k\u\s\c\l\7\x\2\l\t\v\6\r\c\0\0\a\3\o\v\j\9\b\y\j\p\k\5\i\s\d\n\d\f\q\j\6\n\l\y\j\t\t\d\a\f\q\x\s\t\n\w\9\t\w\l\l\j\j\n\b\n\o\s\z\j\h\4\x\k\g\k\g\g\3\w\3\m\6\e\t\d\1\4\m\7\w\c\o\z\d\r\4\6\s\8\p\t\m\2\t\d\n\t\m\w\e\s\7\3\j\4\o\k\m\r\o\r\i\s\b\n\6\a\o\b\b\s\6\q\2\7\4\s\e\u\t\x\l\2\f\f\d\s\l\6\y\x\k\t\4\r\b\c\3\r\1\c\7\o\y\e\2\c\h\3\5\1\i\m\i\b\6\c\c\d\t\4\e\p\k\b\6\h\5\k\v\5\w\9\2\i\d\5\a\0\9\p\p\n\e\8\5\h\j\1\x\g\p\5\c\v\c\v\g\5\l\p\l\f\y\x\4\1\e\y\w\8\x\3\c\u\x\4\c\v\3\6\i\7\c\0\0\d\r\e\w\l\a\k\r\t\u\u\b\9\1\3\7\l\w\e\t\9\f\a\t\r\v\p\t\5\7\3\i\6\c\k\i\i\s\a\e\h\w\a\q\o\e\r\t\b\j\t\b\8\l\d\6\f\a\v\8\u\1\y\t\a
\7\b\v\u\0\m\a\4\g\9\7\c\b\4\d\i\3\p\0\j\u\g\4\p\v\w\8\f\h\b\4\0\1\9\4\z\1\1\5\5\c\a\1\i\v\w\w\c\0\i\7\v\e\c\2\s\n\p\v\h\q\4\f\9\c\l\f\l\6\b\u\w\a\8\1\z\w\j\d\f\j\5\u\p\j\6\r\n\f\a\q\r\b\h\o\x\h\2\j\9\a\2\m\g\x\m\f\z\s\f\h\a\b\3\3\l\a\9\o\r\s\f\w\r\1\q\7\9\m\3\t\6\c\t\s\h\8\1\a\q\v\i\a\f\l\s\y\x\a\r\z\a\z\y\i\b\1\2\7\h\9\q\d\n\2\9\b\r\m\u\f\q\c\7\2\q\u\z\e\p\s\9\0\t\1\m\7\l\p\d\r\o\z\k\u\4\2\d\q\9\9\9\k\7\f\v\c\3\4\p\k\6\i\j\6\t\0\p\p\9\1\w\f\i\z\v\n\7\w\c\u\t\w\1\o\i\g\h\f\e\z\l\q\5\d\n\z\8\m\m\b\0\9\1\z\m\y\a\m\v\v\n\r\u\c\m\7\c\t\9\7\7\e\g\y\9\j\q\o\k\k\5\u\m\x\x\6\b\k\m\z\7\p\0\f\6\s\j\7\d\a\t\r\n\r\c\5\h\v\m\h\q\5\k\8\s\a\a\v\o\z\4\a\j\t\t\o\e\j\w\r\s\x\e\x\4\2\w\q\j\2\i\8\u\0\w\g\q\c\g\i\s\8\0\g\z\q\g\t\y\e\7\j\i\v\r\s\l\3\i\c\u\i\o\o\3\2\j\u\y\j\b\o\u\m\q\b\r\q\n\h\z\1\5\6\8\0\j\8\k\i\r\q\f\v\e\j\i\3\y\e\f\r\n\8\h\v\2\m\n\i\y\4\5\9\9\b\8\4\h\n\v\m\e\0\t\l\l\5\z\n\w\i\i\8\9\0\x\b\b\k\v\8\3\1\f\1\d\0\8\g\f\3\5\4\x\u\8\c\n\a\e\j\n\u\7\t\o\t\h\5\0\t\a\u\7\p\n\z\5\z\b\9\a\b\1\5\f\n\w\q\q\f\c\v\s\k\r\v\a\g\e\y\x\b\r\8\1\u\j\o\f\h\k\h\s\r\i\y\7\e\p\e\h\j\l\5\v\3\p\f\d\a\s\6\e\e\a\v\j\2\5\w\u\3\s\7\v\o\r\6\d\4\d\l\8\4\z\k\y\b\h\x\f\f\s\5\2\e\f\7\b\c\c\l\u\4\e\o\8\l\b\6\x\p\u\0\x\k\4\z\v\w\b\h\1\b\r\x\1\b\i\6\o\n\q\o\2\p\g\f\9\9\3\r\f\q\7\m\3\i\j\j\7\y\1\8\n\z\n\k\h\5\m\b\3\e\e\y\t\x\o\j\r\u\p\7\m\0\2\6\u\9\i\z\9\h\3\2\v\0\f\x\f\0\h\a\x\z\8\k\2\z\w\j\v\3\n\o\8\j\o\m\z\o\a\a\7\1\g\t\k\i\c\u\1\z\u\1\z\4\k\v\m\t\8\n\k\9\s\4\2\j\t\e\1\f\c\e\p\z\f\n\p\y\v\0\v\1\z\m\6\8\h\x\y\3\j\q\l\x\g\j\f\e\y\g\5\1\x\8\u\z\2\b\z\9\i\b\e\r\l\a\7\7\c\h\x\f\r\d\k\x\b\z\m\q\b\c\v\t\k\g\e\8\1\q\7\x\m\2\k\8\h\u\y\e\z\z\1\r\o\6\h\u\r\w\2\r\d\n\5\9\o\v\a\j\m\j\c\e\p\n\9\p\l\2\s\x\a\2\m\n\q\i\v\8\8\y\k\c\n\7\7\j\n\x\q\6\n\c\x\f\o\8\r\d\5\c\q\3\x\1\v\t\f\c\n\7\z\k\6\u\0\z\i\t\y\2\j\6\c\j\o\h\s\v\8\g\k\i\z\s\3\h\c\j\0\g\x\v\2\t\a\u\5\v\0\4\e\a\m\c\3\q\c\l\d\i\j\m\7\l\m\r\m\1\p\j\m\o\o\6\e\7\9\j\6\2\n\5\d\r\s\p\v\r\6\w\x\p\h\b\n\a\7\t\7\k\u\d\h\9\d\y\z\w\8\i\e\2\w\s\w\0\w\h\z\y\0\m\j\m\q\3\7\e\v\9\u\v\0\9\4\c\j\d\8\j\s\y\d\u\3\a\4\l\r\3\1\8\i\k\b\9\c\k\l\t\a\w\u\7\w\j\1\5\e\1\0\w\b\g\l\p\a\f\u\e\2\i\o\p\q\f\v\z\a\k\g\3\3\a\l\8\a\2\6\b\n\s\8\z\y\5\f\5\n\y\a\d\l\l\i\r\p\n\o\e\x\2\8\0\p\w\1\r\t\r\v\6\4\g\p\h\n\r\6\j\p\b\k\h\6\e\w\9\i\y\6\n\4\8\1\8\j\t\0\h\a\5\y\u\2\v\m\j\v\o\0\m\n\q\y\q\4\u\z\d\q\u\m\5\9\3\j\t\p\u ]] 00:27:27.810 00:27:27.810 real 0m3.685s 00:27:27.810 user 0m2.981s 00:27:27.810 sys 0m0.557s 00:27:27.810 05:46:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.810 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:27:27.810 ************************************ 00:27:27.810 END TEST dd_rw_offset 00:27:27.810 ************************************ 00:27:27.810 05:46:31 -- dd/basic_rw.sh@1 -- # cleanup 00:27:27.810 05:46:31 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:27.810 05:46:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:27.810 05:46:31 -- dd/common.sh@11 -- # local nvme_ref= 00:27:27.810 05:46:31 -- dd/common.sh@12 -- # local size=0xffff 00:27:27.810 05:46:31 -- dd/common.sh@14 -- # local bs=1048576 00:27:27.810 05:46:31 -- dd/common.sh@15 -- # local count=1 00:27:27.810 05:46:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:27.810 05:46:31 -- dd/common.sh@18 -- # gen_conf 00:27:27.810 05:46:31 -- dd/common.sh@31 -- # xtrace_disable 00:27:27.810 05:46:31 -- common/autotest_common.sh@10 -- # set +x 00:27:28.081 { 00:27:28.081 "subsystems": [ 00:27:28.081 { 00:27:28.081 
"subsystem": "bdev", 00:27:28.081 "config": [ 00:27:28.081 { 00:27:28.081 "params": { 00:27:28.081 "trtype": "pcie", 00:27:28.081 "traddr": "0000:00:06.0", 00:27:28.081 "name": "Nvme0" 00:27:28.081 }, 00:27:28.081 "method": "bdev_nvme_attach_controller" 00:27:28.081 }, 00:27:28.081 { 00:27:28.081 "method": "bdev_wait_for_examine" 00:27:28.081 } 00:27:28.081 ] 00:27:28.081 } 00:27:28.081 ] 00:27:28.081 } 00:27:28.081 [2024-10-07 05:46:31.819521] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:28.081 [2024-10-07 05:46:31.819727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177770 ] 00:27:28.081 [2024-10-07 05:46:31.991746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.355 [2024-10-07 05:46:32.193802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.560  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:29.560 00:27:29.560 05:46:33 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:29.560 00:27:29.560 real 0m42.689s 00:27:29.560 user 0m34.569s 00:27:29.560 sys 0m6.451s 00:27:29.560 05:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.560 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.560 ************************************ 00:27:29.560 END TEST spdk_dd_basic_rw 00:27:29.560 ************************************ 00:27:29.820 05:46:33 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:29.820 05:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.820 05:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.820 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.820 ************************************ 00:27:29.820 START TEST spdk_dd_posix 00:27:29.820 ************************************ 00:27:29.820 05:46:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:29.820 * Looking for test storage... 
00:27:29.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:29.820 05:46:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.820 05:46:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.820 05:46:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.820 05:46:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.820 05:46:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.820 05:46:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.820 05:46:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.820 05:46:33 -- paths/export.sh@5 -- # export PATH 00:27:29.820 05:46:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.820 05:46:33 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:29.820 05:46:33 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:29.820 05:46:33 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:29.820 05:46:33 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:29.820 05:46:33 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:29.820 05:46:33 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:29.820 05:46:33 -- dd/posix.sh@130 -- # tests 00:27:29.820 05:46:33 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:29.820 * First test run, using AIO 00:27:29.820 05:46:33 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:29.820 05:46:33 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:29.820 05:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.820 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.820 ************************************ 00:27:29.820 START TEST dd_flag_append 00:27:29.820 ************************************ 00:27:29.820 05:46:33 -- common/autotest_common.sh@1104 -- # append 00:27:29.820 05:46:33 -- dd/posix.sh@16 -- # local dump0 00:27:29.820 05:46:33 -- dd/posix.sh@17 -- # local dump1 00:27:29.820 05:46:33 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:29.820 05:46:33 -- dd/common.sh@98 -- # xtrace_disable 00:27:29.820 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.820 05:46:33 -- dd/posix.sh@19 -- # dump0=l1nn08rnz06j7do3o2w8ek0wzhrs6rei 00:27:29.820 05:46:33 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:29.820 05:46:33 -- dd/common.sh@98 -- # xtrace_disable 00:27:29.820 05:46:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.820 05:46:33 -- dd/posix.sh@20 -- # dump1=oqpm9ai3w94jr1nhmg332xdzoeuzki0e 00:27:29.820 05:46:33 -- dd/posix.sh@22 -- # printf %s l1nn08rnz06j7do3o2w8ek0wzhrs6rei 00:27:29.820 05:46:33 -- dd/posix.sh@23 -- # printf %s oqpm9ai3w94jr1nhmg332xdzoeuzki0e 00:27:29.820 05:46:33 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:29.820 [2024-10-07 05:46:33.736845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:29.820 [2024-10-07 05:46:33.737052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177851 ] 00:27:30.080 [2024-10-07 05:46:33.906852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.338 [2024-10-07 05:46:34.098102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.532  Copying: 32/32 [B] (average 31 kBps) 00:27:31.532 00:27:31.532 05:46:35 -- dd/posix.sh@27 -- # [[ oqpm9ai3w94jr1nhmg332xdzoeuzki0el1nn08rnz06j7do3o2w8ek0wzhrs6rei == \o\q\p\m\9\a\i\3\w\9\4\j\r\1\n\h\m\g\3\3\2\x\d\z\o\e\u\z\k\i\0\e\l\1\n\n\0\8\r\n\z\0\6\j\7\d\o\3\o\2\w\8\e\k\0\w\z\h\r\s\6\r\e\i ]] 00:27:31.532 00:27:31.533 real 0m1.752s 00:27:31.533 user 0m1.320s 00:27:31.533 sys 0m0.304s 00:27:31.533 05:46:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.533 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:27:31.533 ************************************ 00:27:31.533 END TEST dd_flag_append 00:27:31.533 ************************************ 00:27:31.533 05:46:35 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:31.533 05:46:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.533 05:46:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.533 05:46:35 -- common/autotest_common.sh@10 -- # set +x 00:27:31.533 ************************************ 00:27:31.533 START TEST dd_flag_directory 00:27:31.533 ************************************ 00:27:31.533 05:46:35 -- common/autotest_common.sh@1104 -- # directory 00:27:31.533 05:46:35 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:31.533 05:46:35 -- common/autotest_common.sh@640 -- # local es=0 
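The comparison that follows is the actual append check: dd.dump1 started life as 32 random characters, another 32 characters from dd.dump0 were copied onto it with --oflag=append, so the file must now read as the original dump1 string immediately followed by the dump0 string. A condensed sketch of the same test, with tr/head standing in for the harness's gen_bytes helper and relative paths used for brevity:

dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
# append must preserve the existing bytes and place the new ones after them
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] || echo "append check failed" >&2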
00:27:31.533 05:46:35 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:31.533 05:46:35 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.533 05:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.533 05:46:35 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.533 05:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.533 05:46:35 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.533 05:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:31.533 05:46:35 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:31.533 05:46:35 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:31.533 05:46:35 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:31.792 [2024-10-07 05:46:35.535665] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:31.792 [2024-10-07 05:46:35.535876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177898 ] 00:27:31.792 [2024-10-07 05:46:35.707282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.051 [2024-10-07 05:46:35.897081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.310 [2024-10-07 05:46:36.177153] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:32.310 [2024-10-07 05:46:36.177254] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:32.310 [2024-10-07 05:46:36.177285] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:32.878 [2024-10-07 05:46:36.811563] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:33.445 05:46:37 -- common/autotest_common.sh@643 -- # es=236 00:27:33.445 05:46:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:33.445 05:46:37 -- common/autotest_common.sh@652 -- # es=108 00:27:33.445 05:46:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:33.445 05:46:37 -- common/autotest_common.sh@660 -- # es=1 00:27:33.445 05:46:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:33.445 05:46:37 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:33.445 05:46:37 -- common/autotest_common.sh@640 -- # local es=0 00:27:33.445 05:46:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:33.445 05:46:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.445 05:46:37 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.445 05:46:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.445 05:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.445 05:46:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.445 05:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.445 05:46:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.445 05:46:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:33.445 05:46:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:33.445 [2024-10-07 05:46:37.250124] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:33.445 [2024-10-07 05:46:37.250331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177925 ] 00:27:33.445 [2024-10-07 05:46:37.421083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.703 [2024-10-07 05:46:37.609561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.963 [2024-10-07 05:46:37.893153] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:33.963 [2024-10-07 05:46:37.893241] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:33.963 [2024-10-07 05:46:37.893269] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:34.901 [2024-10-07 05:46:38.526732] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:35.159 05:46:38 -- common/autotest_common.sh@643 -- # es=236 00:27:35.159 05:46:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:35.159 05:46:38 -- common/autotest_common.sh@652 -- # es=108 00:27:35.159 05:46:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:35.159 05:46:38 -- common/autotest_common.sh@660 -- # es=1 00:27:35.159 05:46:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:35.159 00:27:35.159 real 0m3.427s 00:27:35.159 user 0m2.598s 00:27:35.159 sys 0m0.627s 00:27:35.159 05:46:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.159 05:46:38 -- common/autotest_common.sh@10 -- # set +x 00:27:35.160 ************************************ 00:27:35.160 END TEST dd_flag_directory 00:27:35.160 ************************************ 00:27:35.160 05:46:38 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:35.160 05:46:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:35.160 05:46:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:35.160 05:46:38 -- common/autotest_common.sh@10 -- # set +x 00:27:35.160 ************************************ 00:27:35.160 START TEST dd_flag_nofollow 00:27:35.160 ************************************ 00:27:35.160 05:46:38 -- common/autotest_common.sh@1104 -- # nofollow 00:27:35.160 05:46:38 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:35.160 05:46:38 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:35.160 05:46:38 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:35.160 05:46:38 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:35.160 05:46:38 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:35.160 05:46:38 -- common/autotest_common.sh@640 -- # local es=0 00:27:35.160 05:46:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:35.160 05:46:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.160 05:46:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:35.160 05:46:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.160 05:46:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:35.160 05:46:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.160 05:46:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:35.160 05:46:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:35.160 05:46:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:35.160 05:46:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:35.160 [2024-10-07 05:46:39.027951] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:35.160 [2024-10-07 05:46:39.028393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177970 ] 00:27:35.418 [2024-10-07 05:46:39.195059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.418 [2024-10-07 05:46:39.377696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.986 [2024-10-07 05:46:39.657860] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:35.986 [2024-10-07 05:46:39.657961] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:35.986 [2024-10-07 05:46:39.658007] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:36.554 [2024-10-07 05:46:40.288908] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:36.813 05:46:40 -- common/autotest_common.sh@643 -- # es=216 00:27:36.813 05:46:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:36.813 05:46:40 -- common/autotest_common.sh@652 -- # es=88 00:27:36.813 05:46:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:36.813 05:46:40 -- common/autotest_common.sh@660 -- # es=1 00:27:36.813 05:46:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:36.813 05:46:40 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:36.813 05:46:40 -- common/autotest_common.sh@640 -- # local es=0 00:27:36.813 05:46:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:36.813 05:46:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.813 05:46:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.813 05:46:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.813 05:46:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.813 05:46:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.813 05:46:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.813 05:46:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.813 05:46:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.813 05:46:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:36.813 [2024-10-07 05:46:40.712741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:36.813 [2024-10-07 05:46:40.712885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177996 ] 00:27:37.072 [2024-10-07 05:46:40.867030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.330 [2024-10-07 05:46:41.051249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.589 [2024-10-07 05:46:41.331553] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:37.589 [2024-10-07 05:46:41.331645] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:37.589 [2024-10-07 05:46:41.331674] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:38.155 [2024-10-07 05:46:41.964745] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:38.414 05:46:42 -- common/autotest_common.sh@643 -- # es=216 00:27:38.414 05:46:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:38.414 05:46:42 -- common/autotest_common.sh@652 -- # es=88 00:27:38.414 05:46:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:38.414 05:46:42 -- common/autotest_common.sh@660 -- # es=1 00:27:38.414 05:46:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:38.414 05:46:42 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:38.414 05:46:42 -- dd/common.sh@98 -- # xtrace_disable 00:27:38.414 05:46:42 -- common/autotest_common.sh@10 -- # set +x 00:27:38.414 05:46:42 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:38.414 [2024-10-07 05:46:42.384025] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
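Both nofollow failures above are the intended outcome: dd.dump0.link and dd.dump1.link are symlinks created with ln -fs, and opening them with --iflag=nofollow or --oflag=nofollow is expected to fail with "Too many levels of symbolic links", while the run starting here reads through the link normally because the flag is absent. A rough stand-alone sketch of that negative check, with a plain exit-status test standing in for the harness's NOT wrapper (the same wrapper behind the "Not a directory" case earlier) and illustrative relative paths:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > dd.dump0
head -c 512 /dev/urandom > dd.dump1
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
# nofollow on the input side must refuse to follow the symlink
if $DD --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
    echo "unexpected success: input symlink was followed" >&2
fi
# nofollow on the output side must refuse as well
if $DD --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow; then
    echo "unexpected success: output symlink was followed" >&2
fi
# without the flag the same copy goes straight through the link
$DD --if=dd.dump0.link --of=dd.dump1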
00:27:38.414 [2024-10-07 05:46:42.384181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178020 ] 00:27:38.673 [2024-10-07 05:46:42.537935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.932 [2024-10-07 05:46:42.723425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.126  Copying: 512/512 [B] (average 500 kBps) 00:27:40.126 00:27:40.126 05:46:44 -- dd/posix.sh@49 -- # [[ w81z08rw37mrfhrinrd0lvngb1do4953ute0x10v7g2k5fhfe2i7vmjj8sm3tx8sux6l3uft6nuzgxy7heinbhfsk7zpvzjgy4c6skf5ue62z1d9521e2grdxc0m8a5n02pqsvoe61qb31f8g9zqc6jia8hrh2hdyi4ysak3pe4wgaae4k9mbcstc6kc8l5tgplvmkemq7b1r5r2snisk0dlfzj265zup26efxt4g0al4v86gjxjg7fdzjebl10bqdxjg1wwoxyeo8b5mqfbqhy73emz9mhgccb4q5j5zzmybhxjrw14mn0ahclmvnoerufxctwyim1o1brodrsn1voepac3imkn7o1i80mkodwhlnjgkpp3tsp2bb87qu7g7r5m13iyw0bvhdlmn9r1xnbgwufijjrej13qh075gma1m9qa50eek8nk88d4g81ubjwh2lv09c3cpu23ct2b8unr29twlvt354aeb8jzdoumgwe5z2b0by4kpgihao0j == \w\8\1\z\0\8\r\w\3\7\m\r\f\h\r\i\n\r\d\0\l\v\n\g\b\1\d\o\4\9\5\3\u\t\e\0\x\1\0\v\7\g\2\k\5\f\h\f\e\2\i\7\v\m\j\j\8\s\m\3\t\x\8\s\u\x\6\l\3\u\f\t\6\n\u\z\g\x\y\7\h\e\i\n\b\h\f\s\k\7\z\p\v\z\j\g\y\4\c\6\s\k\f\5\u\e\6\2\z\1\d\9\5\2\1\e\2\g\r\d\x\c\0\m\8\a\5\n\0\2\p\q\s\v\o\e\6\1\q\b\3\1\f\8\g\9\z\q\c\6\j\i\a\8\h\r\h\2\h\d\y\i\4\y\s\a\k\3\p\e\4\w\g\a\a\e\4\k\9\m\b\c\s\t\c\6\k\c\8\l\5\t\g\p\l\v\m\k\e\m\q\7\b\1\r\5\r\2\s\n\i\s\k\0\d\l\f\z\j\2\6\5\z\u\p\2\6\e\f\x\t\4\g\0\a\l\4\v\8\6\g\j\x\j\g\7\f\d\z\j\e\b\l\1\0\b\q\d\x\j\g\1\w\w\o\x\y\e\o\8\b\5\m\q\f\b\q\h\y\7\3\e\m\z\9\m\h\g\c\c\b\4\q\5\j\5\z\z\m\y\b\h\x\j\r\w\1\4\m\n\0\a\h\c\l\m\v\n\o\e\r\u\f\x\c\t\w\y\i\m\1\o\1\b\r\o\d\r\s\n\1\v\o\e\p\a\c\3\i\m\k\n\7\o\1\i\8\0\m\k\o\d\w\h\l\n\j\g\k\p\p\3\t\s\p\2\b\b\8\7\q\u\7\g\7\r\5\m\1\3\i\y\w\0\b\v\h\d\l\m\n\9\r\1\x\n\b\g\w\u\f\i\j\j\r\e\j\1\3\q\h\0\7\5\g\m\a\1\m\9\q\a\5\0\e\e\k\8\n\k\8\8\d\4\g\8\1\u\b\j\w\h\2\l\v\0\9\c\3\c\p\u\2\3\c\t\2\b\8\u\n\r\2\9\t\w\l\v\t\3\5\4\a\e\b\8\j\z\d\o\u\m\g\w\e\5\z\2\b\0\b\y\4\k\p\g\i\h\a\o\0\j ]] 00:27:40.126 00:27:40.126 real 0m5.064s 00:27:40.126 user 0m3.979s 00:27:40.126 sys 0m0.755s 00:27:40.126 05:46:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.126 ************************************ 00:27:40.126 END TEST dd_flag_nofollow 00:27:40.126 ************************************ 00:27:40.126 05:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.126 05:46:44 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:40.126 05:46:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:40.126 05:46:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.126 05:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.126 ************************************ 00:27:40.126 START TEST dd_flag_noatime 00:27:40.126 ************************************ 00:27:40.126 05:46:44 -- common/autotest_common.sh@1104 -- # noatime 00:27:40.126 05:46:44 -- dd/posix.sh@53 -- # local atime_if 00:27:40.126 05:46:44 -- dd/posix.sh@54 -- # local atime_of 00:27:40.126 05:46:44 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:40.126 05:46:44 -- dd/common.sh@98 -- # xtrace_disable 00:27:40.126 05:46:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.126 05:46:44 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.126 05:46:44 -- dd/posix.sh@60 -- # atime_if=1728280003 00:27:40.126 05:46:44 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:40.126 05:46:44 -- dd/posix.sh@61 -- # atime_of=1728280004 00:27:40.126 05:46:44 -- dd/posix.sh@66 -- # sleep 1 00:27:41.532 05:46:45 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:41.532 [2024-10-07 05:46:45.153137] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:41.532 [2024-10-07 05:46:45.153584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178084 ] 00:27:41.532 [2024-10-07 05:46:45.308056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.532 [2024-10-07 05:46:45.478708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.727  Copying: 512/512 [B] (average 500 kBps) 00:27:42.727 00:27:42.727 05:46:46 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.727 05:46:46 -- dd/posix.sh@69 -- # (( atime_if == 1728280003 )) 00:27:42.727 05:46:46 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:42.727 05:46:46 -- dd/posix.sh@70 -- # (( atime_of == 1728280004 )) 00:27:42.727 05:46:46 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:42.986 [2024-10-07 05:46:46.746974] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:42.986 [2024-10-07 05:46:46.747334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178107 ] 00:27:42.986 [2024-10-07 05:46:46.915532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.245 [2024-10-07 05:46:47.083728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.440  Copying: 512/512 [B] (average 500 kBps) 00:27:44.440 00:27:44.440 05:46:48 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:44.440 05:46:48 -- dd/posix.sh@73 -- # (( atime_if < 1728280007 )) 00:27:44.440 00:27:44.440 real 0m4.205s 00:27:44.440 user 0m2.468s 00:27:44.440 sys 0m0.461s 00:27:44.440 05:46:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.440 ************************************ 00:27:44.440 END TEST dd_flag_noatime 00:27:44.440 ************************************ 00:27:44.440 05:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.440 05:46:48 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:44.440 05:46:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:44.440 05:46:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.440 05:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.440 ************************************ 00:27:44.440 START TEST dd_flags_misc 00:27:44.440 ************************************ 00:27:44.440 05:46:48 -- common/autotest_common.sh@1104 -- # io 00:27:44.440 05:46:48 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:44.440 05:46:48 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
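The dd_flag_noatime run that just finished relies entirely on stat --printf=%X: it records the access time of each dump file, sleeps one second so any fresh read would land on a later timestamp, copies with --iflag=noatime, and then asserts the source file's atime has not moved (the follow-up copy without the flag is allowed to advance it). A condensed sketch of the core assertion, with illustrative relative paths:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > dd.dump0
atime_before=$(stat --printf=%X dd.dump0)
sleep 1                                  # a normal read after this point would bump the atime
$DD --if=dd.dump0 --iflag=noatime --of=dd.dump1
atime_after=$(stat --printf=%X dd.dump0)
(( atime_after == atime_before )) || echo "noatime was not honoured" >&2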
00:27:44.440 05:46:48 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:44.440 05:46:48 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:44.440 05:46:48 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:44.440 05:46:48 -- dd/common.sh@98 -- # xtrace_disable 00:27:44.440 05:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.440 05:46:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:44.440 05:46:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:44.440 [2024-10-07 05:46:48.411084] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:44.440 [2024-10-07 05:46:48.411809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178151 ] 00:27:44.699 [2024-10-07 05:46:48.592463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.958 [2024-10-07 05:46:48.822766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.152  Copying: 512/512 [B] (average 500 kBps) 00:27:46.152 00:27:46.152 05:46:50 -- dd/posix.sh@93 -- # [[ jkda7822gphpay5gq7h0991otrbsl18xkaeux6xvdwh2a3wotwud4pozwlu6ms62l33i8kz4eo6ts9yh73bep1xd2xgqsgf8ndev27dsk0wzahwjffun2sr83koj7zz080vfg1qxxq1ho3juhpzjnc7syl2pofq3hrh1iuny4aweer8qykut5mvbb9pp4ty3pwjhk18d6yyn46p0r566aqchgndexevll8g6a4vs38o54pt7h1uibknkrpp0f1c8z33ghixrtjhh2lpbno8r6lvj8a7zavsoi8rkpvqa9s1gvpycxa4fz7aa5l93fn0i8r5he58h5nast1anutnr0k1fbyx4lrnuug3kwdbsx95mu7who2iv1ayor0064dn9kkykzd7z9coavbvutz352bqskpwi35fs68s4vha659xrvapjplm1q67yadfvmgearqwlocp5ehwj0dmvot9uu4j4ucj6m3g2gwqv90ywld879pjtj3m3prwe411ux1q3 == \j\k\d\a\7\8\2\2\g\p\h\p\a\y\5\g\q\7\h\0\9\9\1\o\t\r\b\s\l\1\8\x\k\a\e\u\x\6\x\v\d\w\h\2\a\3\w\o\t\w\u\d\4\p\o\z\w\l\u\6\m\s\6\2\l\3\3\i\8\k\z\4\e\o\6\t\s\9\y\h\7\3\b\e\p\1\x\d\2\x\g\q\s\g\f\8\n\d\e\v\2\7\d\s\k\0\w\z\a\h\w\j\f\f\u\n\2\s\r\8\3\k\o\j\7\z\z\0\8\0\v\f\g\1\q\x\x\q\1\h\o\3\j\u\h\p\z\j\n\c\7\s\y\l\2\p\o\f\q\3\h\r\h\1\i\u\n\y\4\a\w\e\e\r\8\q\y\k\u\t\5\m\v\b\b\9\p\p\4\t\y\3\p\w\j\h\k\1\8\d\6\y\y\n\4\6\p\0\r\5\6\6\a\q\c\h\g\n\d\e\x\e\v\l\l\8\g\6\a\4\v\s\3\8\o\5\4\p\t\7\h\1\u\i\b\k\n\k\r\p\p\0\f\1\c\8\z\3\3\g\h\i\x\r\t\j\h\h\2\l\p\b\n\o\8\r\6\l\v\j\8\a\7\z\a\v\s\o\i\8\r\k\p\v\q\a\9\s\1\g\v\p\y\c\x\a\4\f\z\7\a\a\5\l\9\3\f\n\0\i\8\r\5\h\e\5\8\h\5\n\a\s\t\1\a\n\u\t\n\r\0\k\1\f\b\y\x\4\l\r\n\u\u\g\3\k\w\d\b\s\x\9\5\m\u\7\w\h\o\2\i\v\1\a\y\o\r\0\0\6\4\d\n\9\k\k\y\k\z\d\7\z\9\c\o\a\v\b\v\u\t\z\3\5\2\b\q\s\k\p\w\i\3\5\f\s\6\8\s\4\v\h\a\6\5\9\x\r\v\a\p\j\p\l\m\1\q\6\7\y\a\d\f\v\m\g\e\a\r\q\w\l\o\c\p\5\e\h\w\j\0\d\m\v\o\t\9\u\u\4\j\4\u\c\j\6\m\3\g\2\g\w\q\v\9\0\y\w\l\d\8\7\9\p\j\t\j\3\m\3\p\r\w\e\4\1\1\u\x\1\q\3 ]] 00:27:46.152 05:46:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:46.152 05:46:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:46.152 [2024-10-07 05:46:50.060508] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
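dd_flags_misc walks the full product of read-side flags (direct, nonblock) and write-side flags (direct, nonblock, sync, dsync): each pair copies dd.dump0 over dd.dump1, and the long escaped [[ ... == ... ]] comparisons in this test check that dd.dump1 came out identical to the source. A compact sketch of that loop, with md5sum standing in for the harness's literal string comparison and illustrative relative paths:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > dd.dump0
head -c 512 /dev/urandom > dd.dump1
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        $DD --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        # the copy must reproduce dd.dump0 exactly for every flag combination
        [[ "$(md5sum < dd.dump0)" == "$(md5sum < dd.dump1)" ]] || echo "mismatch: $flag_ro/$flag_rw" >&2
    done
done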
00:27:46.152 [2024-10-07 05:46:50.060952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178179 ] 00:27:46.410 [2024-10-07 05:46:50.215827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.410 [2024-10-07 05:46:50.370378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.605  Copying: 512/512 [B] (average 500 kBps) 00:27:47.605 00:27:47.605 05:46:51 -- dd/posix.sh@93 -- # [[ jkda7822gphpay5gq7h0991otrbsl18xkaeux6xvdwh2a3wotwud4pozwlu6ms62l33i8kz4eo6ts9yh73bep1xd2xgqsgf8ndev27dsk0wzahwjffun2sr83koj7zz080vfg1qxxq1ho3juhpzjnc7syl2pofq3hrh1iuny4aweer8qykut5mvbb9pp4ty3pwjhk18d6yyn46p0r566aqchgndexevll8g6a4vs38o54pt7h1uibknkrpp0f1c8z33ghixrtjhh2lpbno8r6lvj8a7zavsoi8rkpvqa9s1gvpycxa4fz7aa5l93fn0i8r5he58h5nast1anutnr0k1fbyx4lrnuug3kwdbsx95mu7who2iv1ayor0064dn9kkykzd7z9coavbvutz352bqskpwi35fs68s4vha659xrvapjplm1q67yadfvmgearqwlocp5ehwj0dmvot9uu4j4ucj6m3g2gwqv90ywld879pjtj3m3prwe411ux1q3 == \j\k\d\a\7\8\2\2\g\p\h\p\a\y\5\g\q\7\h\0\9\9\1\o\t\r\b\s\l\1\8\x\k\a\e\u\x\6\x\v\d\w\h\2\a\3\w\o\t\w\u\d\4\p\o\z\w\l\u\6\m\s\6\2\l\3\3\i\8\k\z\4\e\o\6\t\s\9\y\h\7\3\b\e\p\1\x\d\2\x\g\q\s\g\f\8\n\d\e\v\2\7\d\s\k\0\w\z\a\h\w\j\f\f\u\n\2\s\r\8\3\k\o\j\7\z\z\0\8\0\v\f\g\1\q\x\x\q\1\h\o\3\j\u\h\p\z\j\n\c\7\s\y\l\2\p\o\f\q\3\h\r\h\1\i\u\n\y\4\a\w\e\e\r\8\q\y\k\u\t\5\m\v\b\b\9\p\p\4\t\y\3\p\w\j\h\k\1\8\d\6\y\y\n\4\6\p\0\r\5\6\6\a\q\c\h\g\n\d\e\x\e\v\l\l\8\g\6\a\4\v\s\3\8\o\5\4\p\t\7\h\1\u\i\b\k\n\k\r\p\p\0\f\1\c\8\z\3\3\g\h\i\x\r\t\j\h\h\2\l\p\b\n\o\8\r\6\l\v\j\8\a\7\z\a\v\s\o\i\8\r\k\p\v\q\a\9\s\1\g\v\p\y\c\x\a\4\f\z\7\a\a\5\l\9\3\f\n\0\i\8\r\5\h\e\5\8\h\5\n\a\s\t\1\a\n\u\t\n\r\0\k\1\f\b\y\x\4\l\r\n\u\u\g\3\k\w\d\b\s\x\9\5\m\u\7\w\h\o\2\i\v\1\a\y\o\r\0\0\6\4\d\n\9\k\k\y\k\z\d\7\z\9\c\o\a\v\b\v\u\t\z\3\5\2\b\q\s\k\p\w\i\3\5\f\s\6\8\s\4\v\h\a\6\5\9\x\r\v\a\p\j\p\l\m\1\q\6\7\y\a\d\f\v\m\g\e\a\r\q\w\l\o\c\p\5\e\h\w\j\0\d\m\v\o\t\9\u\u\4\j\4\u\c\j\6\m\3\g\2\g\w\q\v\9\0\y\w\l\d\8\7\9\p\j\t\j\3\m\3\p\r\w\e\4\1\1\u\x\1\q\3 ]] 00:27:47.605 05:46:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:47.605 05:46:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:47.864 [2024-10-07 05:46:51.632594] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:47.864 [2024-10-07 05:46:51.633108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178200 ] 00:27:47.864 [2024-10-07 05:46:51.803616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.128 [2024-10-07 05:46:51.973621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.325  Copying: 512/512 [B] (average 166 kBps) 00:27:49.325 00:27:49.325 05:46:53 -- dd/posix.sh@93 -- # [[ jkda7822gphpay5gq7h0991otrbsl18xkaeux6xvdwh2a3wotwud4pozwlu6ms62l33i8kz4eo6ts9yh73bep1xd2xgqsgf8ndev27dsk0wzahwjffun2sr83koj7zz080vfg1qxxq1ho3juhpzjnc7syl2pofq3hrh1iuny4aweer8qykut5mvbb9pp4ty3pwjhk18d6yyn46p0r566aqchgndexevll8g6a4vs38o54pt7h1uibknkrpp0f1c8z33ghixrtjhh2lpbno8r6lvj8a7zavsoi8rkpvqa9s1gvpycxa4fz7aa5l93fn0i8r5he58h5nast1anutnr0k1fbyx4lrnuug3kwdbsx95mu7who2iv1ayor0064dn9kkykzd7z9coavbvutz352bqskpwi35fs68s4vha659xrvapjplm1q67yadfvmgearqwlocp5ehwj0dmvot9uu4j4ucj6m3g2gwqv90ywld879pjtj3m3prwe411ux1q3 == \j\k\d\a\7\8\2\2\g\p\h\p\a\y\5\g\q\7\h\0\9\9\1\o\t\r\b\s\l\1\8\x\k\a\e\u\x\6\x\v\d\w\h\2\a\3\w\o\t\w\u\d\4\p\o\z\w\l\u\6\m\s\6\2\l\3\3\i\8\k\z\4\e\o\6\t\s\9\y\h\7\3\b\e\p\1\x\d\2\x\g\q\s\g\f\8\n\d\e\v\2\7\d\s\k\0\w\z\a\h\w\j\f\f\u\n\2\s\r\8\3\k\o\j\7\z\z\0\8\0\v\f\g\1\q\x\x\q\1\h\o\3\j\u\h\p\z\j\n\c\7\s\y\l\2\p\o\f\q\3\h\r\h\1\i\u\n\y\4\a\w\e\e\r\8\q\y\k\u\t\5\m\v\b\b\9\p\p\4\t\y\3\p\w\j\h\k\1\8\d\6\y\y\n\4\6\p\0\r\5\6\6\a\q\c\h\g\n\d\e\x\e\v\l\l\8\g\6\a\4\v\s\3\8\o\5\4\p\t\7\h\1\u\i\b\k\n\k\r\p\p\0\f\1\c\8\z\3\3\g\h\i\x\r\t\j\h\h\2\l\p\b\n\o\8\r\6\l\v\j\8\a\7\z\a\v\s\o\i\8\r\k\p\v\q\a\9\s\1\g\v\p\y\c\x\a\4\f\z\7\a\a\5\l\9\3\f\n\0\i\8\r\5\h\e\5\8\h\5\n\a\s\t\1\a\n\u\t\n\r\0\k\1\f\b\y\x\4\l\r\n\u\u\g\3\k\w\d\b\s\x\9\5\m\u\7\w\h\o\2\i\v\1\a\y\o\r\0\0\6\4\d\n\9\k\k\y\k\z\d\7\z\9\c\o\a\v\b\v\u\t\z\3\5\2\b\q\s\k\p\w\i\3\5\f\s\6\8\s\4\v\h\a\6\5\9\x\r\v\a\p\j\p\l\m\1\q\6\7\y\a\d\f\v\m\g\e\a\r\q\w\l\o\c\p\5\e\h\w\j\0\d\m\v\o\t\9\u\u\4\j\4\u\c\j\6\m\3\g\2\g\w\q\v\9\0\y\w\l\d\8\7\9\p\j\t\j\3\m\3\p\r\w\e\4\1\1\u\x\1\q\3 ]] 00:27:49.325 05:46:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:49.325 05:46:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:49.325 [2024-10-07 05:46:53.235694] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:49.325 [2024-10-07 05:46:53.236179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178227 ] 00:27:49.584 [2024-10-07 05:46:53.407569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.843 [2024-10-07 05:46:53.574899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.040  Copying: 512/512 [B] (average 125 kBps) 00:27:51.040 00:27:51.040 05:46:54 -- dd/posix.sh@93 -- # [[ jkda7822gphpay5gq7h0991otrbsl18xkaeux6xvdwh2a3wotwud4pozwlu6ms62l33i8kz4eo6ts9yh73bep1xd2xgqsgf8ndev27dsk0wzahwjffun2sr83koj7zz080vfg1qxxq1ho3juhpzjnc7syl2pofq3hrh1iuny4aweer8qykut5mvbb9pp4ty3pwjhk18d6yyn46p0r566aqchgndexevll8g6a4vs38o54pt7h1uibknkrpp0f1c8z33ghixrtjhh2lpbno8r6lvj8a7zavsoi8rkpvqa9s1gvpycxa4fz7aa5l93fn0i8r5he58h5nast1anutnr0k1fbyx4lrnuug3kwdbsx95mu7who2iv1ayor0064dn9kkykzd7z9coavbvutz352bqskpwi35fs68s4vha659xrvapjplm1q67yadfvmgearqwlocp5ehwj0dmvot9uu4j4ucj6m3g2gwqv90ywld879pjtj3m3prwe411ux1q3 == \j\k\d\a\7\8\2\2\g\p\h\p\a\y\5\g\q\7\h\0\9\9\1\o\t\r\b\s\l\1\8\x\k\a\e\u\x\6\x\v\d\w\h\2\a\3\w\o\t\w\u\d\4\p\o\z\w\l\u\6\m\s\6\2\l\3\3\i\8\k\z\4\e\o\6\t\s\9\y\h\7\3\b\e\p\1\x\d\2\x\g\q\s\g\f\8\n\d\e\v\2\7\d\s\k\0\w\z\a\h\w\j\f\f\u\n\2\s\r\8\3\k\o\j\7\z\z\0\8\0\v\f\g\1\q\x\x\q\1\h\o\3\j\u\h\p\z\j\n\c\7\s\y\l\2\p\o\f\q\3\h\r\h\1\i\u\n\y\4\a\w\e\e\r\8\q\y\k\u\t\5\m\v\b\b\9\p\p\4\t\y\3\p\w\j\h\k\1\8\d\6\y\y\n\4\6\p\0\r\5\6\6\a\q\c\h\g\n\d\e\x\e\v\l\l\8\g\6\a\4\v\s\3\8\o\5\4\p\t\7\h\1\u\i\b\k\n\k\r\p\p\0\f\1\c\8\z\3\3\g\h\i\x\r\t\j\h\h\2\l\p\b\n\o\8\r\6\l\v\j\8\a\7\z\a\v\s\o\i\8\r\k\p\v\q\a\9\s\1\g\v\p\y\c\x\a\4\f\z\7\a\a\5\l\9\3\f\n\0\i\8\r\5\h\e\5\8\h\5\n\a\s\t\1\a\n\u\t\n\r\0\k\1\f\b\y\x\4\l\r\n\u\u\g\3\k\w\d\b\s\x\9\5\m\u\7\w\h\o\2\i\v\1\a\y\o\r\0\0\6\4\d\n\9\k\k\y\k\z\d\7\z\9\c\o\a\v\b\v\u\t\z\3\5\2\b\q\s\k\p\w\i\3\5\f\s\6\8\s\4\v\h\a\6\5\9\x\r\v\a\p\j\p\l\m\1\q\6\7\y\a\d\f\v\m\g\e\a\r\q\w\l\o\c\p\5\e\h\w\j\0\d\m\v\o\t\9\u\u\4\j\4\u\c\j\6\m\3\g\2\g\w\q\v\9\0\y\w\l\d\8\7\9\p\j\t\j\3\m\3\p\r\w\e\4\1\1\u\x\1\q\3 ]] 00:27:51.040 05:46:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:51.040 05:46:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:51.040 05:46:54 -- dd/common.sh@98 -- # xtrace_disable 00:27:51.040 05:46:54 -- common/autotest_common.sh@10 -- # set +x 00:27:51.040 05:46:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:51.040 05:46:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:51.040 [2024-10-07 05:46:54.850986] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:51.040 [2024-10-07 05:46:54.851485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178244 ] 00:27:51.040 [2024-10-07 05:46:55.017522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.299 [2024-10-07 05:46:55.194186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.495  Copying: 512/512 [B] (average 500 kBps) 00:27:52.495 00:27:52.495 05:46:56 -- dd/posix.sh@93 -- # [[ xcg62o2fqsaxaohzc6fwhnzwm4wrl4n872uuu6yrkn9cfyc0hdyjt8ws3ru03i1uppu3mfh8faiogqz9qfwwwhrznas7nmoh7tu0ql4wvhkvmb32pzz4zamz09h5olxtpqnw2o3hakctvik8magvzefkgr0dlbpzmhpy41zqq2dh4x69z9c6d80e6uoo502tpfi3cccpbtnqfr3a78r59h576ncvkanhhy9pou0n7bzhjo7fwjivdgkku6w5wohmojnzpqs3dwyd6s2xrsuv28ytj3tockd5v8z7n6u9tx7jmtkryf62jmi3pd568vf2nz8qv5zyo3w94pjdspqax9hq8n06vcqlt1bkgr122jlsc622iqaauvhohhfu5akk6cs0igiwfwdagjufkjjdvjwppvvqpr5omtr88goqfanzagl4a85mndlhgq6f054n28d0hb0tcv6h42i1cg4n16cm4kr5rn0bbmai02ndlilt4mxda6g9hn7kqxu98wrg == \x\c\g\6\2\o\2\f\q\s\a\x\a\o\h\z\c\6\f\w\h\n\z\w\m\4\w\r\l\4\n\8\7\2\u\u\u\6\y\r\k\n\9\c\f\y\c\0\h\d\y\j\t\8\w\s\3\r\u\0\3\i\1\u\p\p\u\3\m\f\h\8\f\a\i\o\g\q\z\9\q\f\w\w\w\h\r\z\n\a\s\7\n\m\o\h\7\t\u\0\q\l\4\w\v\h\k\v\m\b\3\2\p\z\z\4\z\a\m\z\0\9\h\5\o\l\x\t\p\q\n\w\2\o\3\h\a\k\c\t\v\i\k\8\m\a\g\v\z\e\f\k\g\r\0\d\l\b\p\z\m\h\p\y\4\1\z\q\q\2\d\h\4\x\6\9\z\9\c\6\d\8\0\e\6\u\o\o\5\0\2\t\p\f\i\3\c\c\c\p\b\t\n\q\f\r\3\a\7\8\r\5\9\h\5\7\6\n\c\v\k\a\n\h\h\y\9\p\o\u\0\n\7\b\z\h\j\o\7\f\w\j\i\v\d\g\k\k\u\6\w\5\w\o\h\m\o\j\n\z\p\q\s\3\d\w\y\d\6\s\2\x\r\s\u\v\2\8\y\t\j\3\t\o\c\k\d\5\v\8\z\7\n\6\u\9\t\x\7\j\m\t\k\r\y\f\6\2\j\m\i\3\p\d\5\6\8\v\f\2\n\z\8\q\v\5\z\y\o\3\w\9\4\p\j\d\s\p\q\a\x\9\h\q\8\n\0\6\v\c\q\l\t\1\b\k\g\r\1\2\2\j\l\s\c\6\2\2\i\q\a\a\u\v\h\o\h\h\f\u\5\a\k\k\6\c\s\0\i\g\i\w\f\w\d\a\g\j\u\f\k\j\j\d\v\j\w\p\p\v\v\q\p\r\5\o\m\t\r\8\8\g\o\q\f\a\n\z\a\g\l\4\a\8\5\m\n\d\l\h\g\q\6\f\0\5\4\n\2\8\d\0\h\b\0\t\c\v\6\h\4\2\i\1\c\g\4\n\1\6\c\m\4\k\r\5\r\n\0\b\b\m\a\i\0\2\n\d\l\i\l\t\4\m\x\d\a\6\g\9\h\n\7\k\q\x\u\9\8\w\r\g ]] 00:27:52.495 05:46:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:52.495 05:46:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:52.495 [2024-10-07 05:46:56.445120] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:52.495 [2024-10-07 05:46:56.445507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178272 ] 00:27:52.754 [2024-10-07 05:46:56.593355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.013 [2024-10-07 05:46:56.748954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.209  Copying: 512/512 [B] (average 500 kBps) 00:27:54.209 00:27:54.209 05:46:57 -- dd/posix.sh@93 -- # [[ xcg62o2fqsaxaohzc6fwhnzwm4wrl4n872uuu6yrkn9cfyc0hdyjt8ws3ru03i1uppu3mfh8faiogqz9qfwwwhrznas7nmoh7tu0ql4wvhkvmb32pzz4zamz09h5olxtpqnw2o3hakctvik8magvzefkgr0dlbpzmhpy41zqq2dh4x69z9c6d80e6uoo502tpfi3cccpbtnqfr3a78r59h576ncvkanhhy9pou0n7bzhjo7fwjivdgkku6w5wohmojnzpqs3dwyd6s2xrsuv28ytj3tockd5v8z7n6u9tx7jmtkryf62jmi3pd568vf2nz8qv5zyo3w94pjdspqax9hq8n06vcqlt1bkgr122jlsc622iqaauvhohhfu5akk6cs0igiwfwdagjufkjjdvjwppvvqpr5omtr88goqfanzagl4a85mndlhgq6f054n28d0hb0tcv6h42i1cg4n16cm4kr5rn0bbmai02ndlilt4mxda6g9hn7kqxu98wrg == \x\c\g\6\2\o\2\f\q\s\a\x\a\o\h\z\c\6\f\w\h\n\z\w\m\4\w\r\l\4\n\8\7\2\u\u\u\6\y\r\k\n\9\c\f\y\c\0\h\d\y\j\t\8\w\s\3\r\u\0\3\i\1\u\p\p\u\3\m\f\h\8\f\a\i\o\g\q\z\9\q\f\w\w\w\h\r\z\n\a\s\7\n\m\o\h\7\t\u\0\q\l\4\w\v\h\k\v\m\b\3\2\p\z\z\4\z\a\m\z\0\9\h\5\o\l\x\t\p\q\n\w\2\o\3\h\a\k\c\t\v\i\k\8\m\a\g\v\z\e\f\k\g\r\0\d\l\b\p\z\m\h\p\y\4\1\z\q\q\2\d\h\4\x\6\9\z\9\c\6\d\8\0\e\6\u\o\o\5\0\2\t\p\f\i\3\c\c\c\p\b\t\n\q\f\r\3\a\7\8\r\5\9\h\5\7\6\n\c\v\k\a\n\h\h\y\9\p\o\u\0\n\7\b\z\h\j\o\7\f\w\j\i\v\d\g\k\k\u\6\w\5\w\o\h\m\o\j\n\z\p\q\s\3\d\w\y\d\6\s\2\x\r\s\u\v\2\8\y\t\j\3\t\o\c\k\d\5\v\8\z\7\n\6\u\9\t\x\7\j\m\t\k\r\y\f\6\2\j\m\i\3\p\d\5\6\8\v\f\2\n\z\8\q\v\5\z\y\o\3\w\9\4\p\j\d\s\p\q\a\x\9\h\q\8\n\0\6\v\c\q\l\t\1\b\k\g\r\1\2\2\j\l\s\c\6\2\2\i\q\a\a\u\v\h\o\h\h\f\u\5\a\k\k\6\c\s\0\i\g\i\w\f\w\d\a\g\j\u\f\k\j\j\d\v\j\w\p\p\v\v\q\p\r\5\o\m\t\r\8\8\g\o\q\f\a\n\z\a\g\l\4\a\8\5\m\n\d\l\h\g\q\6\f\0\5\4\n\2\8\d\0\h\b\0\t\c\v\6\h\4\2\i\1\c\g\4\n\1\6\c\m\4\k\r\5\r\n\0\b\b\m\a\i\0\2\n\d\l\i\l\t\4\m\x\d\a\6\g\9\h\n\7\k\q\x\u\9\8\w\r\g ]] 00:27:54.209 05:46:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:54.209 05:46:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:54.209 [2024-10-07 05:46:58.014457] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:54.209 [2024-10-07 05:46:58.014963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178290 ] 00:27:54.209 [2024-10-07 05:46:58.183997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.467 [2024-10-07 05:46:58.351710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.662  Copying: 512/512 [B] (average 166 kBps) 00:27:55.662 00:27:55.662 05:46:59 -- dd/posix.sh@93 -- # [[ xcg62o2fqsaxaohzc6fwhnzwm4wrl4n872uuu6yrkn9cfyc0hdyjt8ws3ru03i1uppu3mfh8faiogqz9qfwwwhrznas7nmoh7tu0ql4wvhkvmb32pzz4zamz09h5olxtpqnw2o3hakctvik8magvzefkgr0dlbpzmhpy41zqq2dh4x69z9c6d80e6uoo502tpfi3cccpbtnqfr3a78r59h576ncvkanhhy9pou0n7bzhjo7fwjivdgkku6w5wohmojnzpqs3dwyd6s2xrsuv28ytj3tockd5v8z7n6u9tx7jmtkryf62jmi3pd568vf2nz8qv5zyo3w94pjdspqax9hq8n06vcqlt1bkgr122jlsc622iqaauvhohhfu5akk6cs0igiwfwdagjufkjjdvjwppvvqpr5omtr88goqfanzagl4a85mndlhgq6f054n28d0hb0tcv6h42i1cg4n16cm4kr5rn0bbmai02ndlilt4mxda6g9hn7kqxu98wrg == \x\c\g\6\2\o\2\f\q\s\a\x\a\o\h\z\c\6\f\w\h\n\z\w\m\4\w\r\l\4\n\8\7\2\u\u\u\6\y\r\k\n\9\c\f\y\c\0\h\d\y\j\t\8\w\s\3\r\u\0\3\i\1\u\p\p\u\3\m\f\h\8\f\a\i\o\g\q\z\9\q\f\w\w\w\h\r\z\n\a\s\7\n\m\o\h\7\t\u\0\q\l\4\w\v\h\k\v\m\b\3\2\p\z\z\4\z\a\m\z\0\9\h\5\o\l\x\t\p\q\n\w\2\o\3\h\a\k\c\t\v\i\k\8\m\a\g\v\z\e\f\k\g\r\0\d\l\b\p\z\m\h\p\y\4\1\z\q\q\2\d\h\4\x\6\9\z\9\c\6\d\8\0\e\6\u\o\o\5\0\2\t\p\f\i\3\c\c\c\p\b\t\n\q\f\r\3\a\7\8\r\5\9\h\5\7\6\n\c\v\k\a\n\h\h\y\9\p\o\u\0\n\7\b\z\h\j\o\7\f\w\j\i\v\d\g\k\k\u\6\w\5\w\o\h\m\o\j\n\z\p\q\s\3\d\w\y\d\6\s\2\x\r\s\u\v\2\8\y\t\j\3\t\o\c\k\d\5\v\8\z\7\n\6\u\9\t\x\7\j\m\t\k\r\y\f\6\2\j\m\i\3\p\d\5\6\8\v\f\2\n\z\8\q\v\5\z\y\o\3\w\9\4\p\j\d\s\p\q\a\x\9\h\q\8\n\0\6\v\c\q\l\t\1\b\k\g\r\1\2\2\j\l\s\c\6\2\2\i\q\a\a\u\v\h\o\h\h\f\u\5\a\k\k\6\c\s\0\i\g\i\w\f\w\d\a\g\j\u\f\k\j\j\d\v\j\w\p\p\v\v\q\p\r\5\o\m\t\r\8\8\g\o\q\f\a\n\z\a\g\l\4\a\8\5\m\n\d\l\h\g\q\6\f\0\5\4\n\2\8\d\0\h\b\0\t\c\v\6\h\4\2\i\1\c\g\4\n\1\6\c\m\4\k\r\5\r\n\0\b\b\m\a\i\0\2\n\d\l\i\l\t\4\m\x\d\a\6\g\9\h\n\7\k\q\x\u\9\8\w\r\g ]] 00:27:55.662 05:46:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:55.662 05:46:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:55.662 [2024-10-07 05:46:59.614775] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:27:55.662 [2024-10-07 05:46:59.615275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178314 ] 00:27:55.921 [2024-10-07 05:46:59.784168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.180 [2024-10-07 05:46:59.950554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.377  Copying: 512/512 [B] (average 250 kBps) 00:27:57.377 00:27:57.377 ************************************ 00:27:57.377 END TEST dd_flags_misc 00:27:57.377 ************************************ 00:27:57.378 05:47:01 -- dd/posix.sh@93 -- # [[ xcg62o2fqsaxaohzc6fwhnzwm4wrl4n872uuu6yrkn9cfyc0hdyjt8ws3ru03i1uppu3mfh8faiogqz9qfwwwhrznas7nmoh7tu0ql4wvhkvmb32pzz4zamz09h5olxtpqnw2o3hakctvik8magvzefkgr0dlbpzmhpy41zqq2dh4x69z9c6d80e6uoo502tpfi3cccpbtnqfr3a78r59h576ncvkanhhy9pou0n7bzhjo7fwjivdgkku6w5wohmojnzpqs3dwyd6s2xrsuv28ytj3tockd5v8z7n6u9tx7jmtkryf62jmi3pd568vf2nz8qv5zyo3w94pjdspqax9hq8n06vcqlt1bkgr122jlsc622iqaauvhohhfu5akk6cs0igiwfwdagjufkjjdvjwppvvqpr5omtr88goqfanzagl4a85mndlhgq6f054n28d0hb0tcv6h42i1cg4n16cm4kr5rn0bbmai02ndlilt4mxda6g9hn7kqxu98wrg == \x\c\g\6\2\o\2\f\q\s\a\x\a\o\h\z\c\6\f\w\h\n\z\w\m\4\w\r\l\4\n\8\7\2\u\u\u\6\y\r\k\n\9\c\f\y\c\0\h\d\y\j\t\8\w\s\3\r\u\0\3\i\1\u\p\p\u\3\m\f\h\8\f\a\i\o\g\q\z\9\q\f\w\w\w\h\r\z\n\a\s\7\n\m\o\h\7\t\u\0\q\l\4\w\v\h\k\v\m\b\3\2\p\z\z\4\z\a\m\z\0\9\h\5\o\l\x\t\p\q\n\w\2\o\3\h\a\k\c\t\v\i\k\8\m\a\g\v\z\e\f\k\g\r\0\d\l\b\p\z\m\h\p\y\4\1\z\q\q\2\d\h\4\x\6\9\z\9\c\6\d\8\0\e\6\u\o\o\5\0\2\t\p\f\i\3\c\c\c\p\b\t\n\q\f\r\3\a\7\8\r\5\9\h\5\7\6\n\c\v\k\a\n\h\h\y\9\p\o\u\0\n\7\b\z\h\j\o\7\f\w\j\i\v\d\g\k\k\u\6\w\5\w\o\h\m\o\j\n\z\p\q\s\3\d\w\y\d\6\s\2\x\r\s\u\v\2\8\y\t\j\3\t\o\c\k\d\5\v\8\z\7\n\6\u\9\t\x\7\j\m\t\k\r\y\f\6\2\j\m\i\3\p\d\5\6\8\v\f\2\n\z\8\q\v\5\z\y\o\3\w\9\4\p\j\d\s\p\q\a\x\9\h\q\8\n\0\6\v\c\q\l\t\1\b\k\g\r\1\2\2\j\l\s\c\6\2\2\i\q\a\a\u\v\h\o\h\h\f\u\5\a\k\k\6\c\s\0\i\g\i\w\f\w\d\a\g\j\u\f\k\j\j\d\v\j\w\p\p\v\v\q\p\r\5\o\m\t\r\8\8\g\o\q\f\a\n\z\a\g\l\4\a\8\5\m\n\d\l\h\g\q\6\f\0\5\4\n\2\8\d\0\h\b\0\t\c\v\6\h\4\2\i\1\c\g\4\n\1\6\c\m\4\k\r\5\r\n\0\b\b\m\a\i\0\2\n\d\l\i\l\t\4\m\x\d\a\6\g\9\h\n\7\k\q\x\u\9\8\w\r\g ]] 00:27:57.378 00:27:57.378 real 0m12.828s 00:27:57.378 user 0m9.837s 00:27:57.378 sys 0m1.893s 00:27:57.378 05:47:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.378 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:27:57.378 05:47:01 -- dd/posix.sh@131 -- # tests_forced_aio 00:27:57.378 05:47:01 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:27:57.378 * Second test run, using AIO 00:27:57.378 05:47:01 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:27:57.378 05:47:01 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:27:57.378 05:47:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.378 05:47:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.378 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:27:57.378 ************************************ 00:27:57.378 START TEST dd_flag_append_forced_aio 00:27:57.378 ************************************ 00:27:57.378 05:47:01 -- common/autotest_common.sh@1104 -- # append 00:27:57.378 05:47:01 -- dd/posix.sh@16 -- # local dump0 00:27:57.378 05:47:01 -- dd/posix.sh@17 -- # local dump1 00:27:57.378 05:47:01 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:57.378 05:47:01 -- dd/common.sh@98 -- # xtrace_disable 
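From this point the harness reruns the same cases with DD_APP extended by --aio ("* Second test run, using AIO"), so every spdk_dd invocation below carries that extra flag and forces the AIO backend instead of liburing. In stand-alone terms the only difference is the added flag, e.g. for the append case just starting (payload and paths illustrative):

printf %s "some payload" > dd.dump0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append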
00:27:57.378 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:27:57.378 05:47:01 -- dd/posix.sh@19 -- # dump0=gqhuq9sjx0fwnyewrkrgxztmd5ejfgq1 00:27:57.378 05:47:01 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:57.378 05:47:01 -- dd/common.sh@98 -- # xtrace_disable 00:27:57.378 05:47:01 -- common/autotest_common.sh@10 -- # set +x 00:27:57.378 05:47:01 -- dd/posix.sh@20 -- # dump1=kjarxx1e9yvvkzd4utfqwcyukwxus26i 00:27:57.378 05:47:01 -- dd/posix.sh@22 -- # printf %s gqhuq9sjx0fwnyewrkrgxztmd5ejfgq1 00:27:57.378 05:47:01 -- dd/posix.sh@23 -- # printf %s kjarxx1e9yvvkzd4utfqwcyukwxus26i 00:27:57.378 05:47:01 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:57.378 [2024-10-07 05:47:01.303623] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:57.378 [2024-10-07 05:47:01.303996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178362 ] 00:27:57.637 [2024-10-07 05:47:01.471277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.896 [2024-10-07 05:47:01.626279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.274  Copying: 32/32 [B] (average 31 kBps) 00:27:59.274 00:27:59.274 05:47:02 -- dd/posix.sh@27 -- # [[ kjarxx1e9yvvkzd4utfqwcyukwxus26igqhuq9sjx0fwnyewrkrgxztmd5ejfgq1 == \k\j\a\r\x\x\1\e\9\y\v\v\k\z\d\4\u\t\f\q\w\c\y\u\k\w\x\u\s\2\6\i\g\q\h\u\q\9\s\j\x\0\f\w\n\y\e\w\r\k\r\g\x\z\t\m\d\5\e\j\f\g\q\1 ]] 00:27:59.274 00:27:59.274 real 0m1.610s 00:27:59.274 user 0m1.248s 00:27:59.274 sys 0m0.218s 00:27:59.274 05:47:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.274 05:47:02 -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 ************************************ 00:27:59.274 END TEST dd_flag_append_forced_aio 00:27:59.274 ************************************ 00:27:59.274 05:47:02 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:27:59.274 05:47:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:59.274 05:47:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.274 05:47:02 -- common/autotest_common.sh@10 -- # set +x 00:27:59.274 ************************************ 00:27:59.274 START TEST dd_flag_directory_forced_aio 00:27:59.274 ************************************ 00:27:59.274 05:47:02 -- common/autotest_common.sh@1104 -- # directory 00:27:59.274 05:47:02 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.274 05:47:02 -- common/autotest_common.sh@640 -- # local es=0 00:27:59.274 05:47:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.274 05:47:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.274 05:47:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.274 05:47:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.274 05:47:02 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.274 05:47:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.274 05:47:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.274 05:47:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.274 05:47:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:59.274 05:47:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.274 [2024-10-07 05:47:02.943946] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:27:59.274 [2024-10-07 05:47:02.944686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178407 ] 00:27:59.274 [2024-10-07 05:47:03.098583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.533 [2024-10-07 05:47:03.265337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.792 [2024-10-07 05:47:03.513232] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:59.792 [2024-10-07 05:47:03.513623] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:59.792 [2024-10-07 05:47:03.513694] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:00.360 [2024-10-07 05:47:04.089515] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:00.619 05:47:04 -- common/autotest_common.sh@643 -- # es=236 00:28:00.619 05:47:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:00.619 05:47:04 -- common/autotest_common.sh@652 -- # es=108 00:28:00.619 05:47:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:00.619 05:47:04 -- common/autotest_common.sh@660 -- # es=1 00:28:00.619 05:47:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:00.619 05:47:04 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:00.619 05:47:04 -- common/autotest_common.sh@640 -- # local es=0 00:28:00.619 05:47:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:00.619 05:47:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.619 05:47:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.619 05:47:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.619 05:47:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.619 05:47:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.619 05:47:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.620 05:47:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:28:00.620 05:47:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:00.620 05:47:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:00.620 [2024-10-07 05:47:04.492509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:00.620 [2024-10-07 05:47:04.492982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178435 ] 00:28:00.888 [2024-10-07 05:47:04.663598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.888 [2024-10-07 05:47:04.830756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.171 [2024-10-07 05:47:05.080992] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:01.171 [2024-10-07 05:47:05.081356] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:01.171 [2024-10-07 05:47:05.081423] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:01.754 [2024-10-07 05:47:05.654725] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:02.012 05:47:05 -- common/autotest_common.sh@643 -- # es=236 00:28:02.012 05:47:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:02.012 05:47:05 -- common/autotest_common.sh@652 -- # es=108 00:28:02.012 05:47:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:02.012 05:47:05 -- common/autotest_common.sh@660 -- # es=1 00:28:02.012 ************************************ 00:28:02.012 END TEST dd_flag_directory_forced_aio 00:28:02.012 ************************************ 00:28:02.012 05:47:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:02.012 00:28:02.012 real 0m3.097s 00:28:02.012 user 0m2.443s 00:28:02.012 sys 0m0.448s 00:28:02.012 05:47:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.012 05:47:05 -- common/autotest_common.sh@10 -- # set +x 00:28:02.270 05:47:06 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:02.270 05:47:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:02.270 05:47:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.270 05:47:06 -- common/autotest_common.sh@10 -- # set +x 00:28:02.270 ************************************ 00:28:02.270 START TEST dd_flag_nofollow_forced_aio 00:28:02.270 ************************************ 00:28:02.270 05:47:06 -- common/autotest_common.sh@1104 -- # nofollow 00:28:02.270 05:47:06 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:02.270 05:47:06 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:02.270 05:47:06 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:02.270 05:47:06 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:02.270 05:47:06 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.270 05:47:06 -- common/autotest_common.sh@640 -- # local es=0 00:28:02.270 05:47:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.270 05:47:06 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.270 05:47:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.270 05:47:06 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.270 05:47:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.270 05:47:06 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.270 05:47:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.270 05:47:06 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.270 05:47:06 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:02.270 05:47:06 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.270 [2024-10-07 05:47:06.108725] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:02.270 [2024-10-07 05:47:06.109072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178473 ] 00:28:02.528 [2024-10-07 05:47:06.261480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.528 [2024-10-07 05:47:06.428728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.786 [2024-10-07 05:47:06.676762] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:02.786 [2024-10-07 05:47:06.677146] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:02.786 [2024-10-07 05:47:06.677224] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:03.351 [2024-10-07 05:47:07.249563] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:03.609 05:47:07 -- common/autotest_common.sh@643 -- # es=216 00:28:03.609 05:47:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:03.609 05:47:07 -- common/autotest_common.sh@652 -- # es=88 00:28:03.609 05:47:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:03.609 05:47:07 -- common/autotest_common.sh@660 -- # es=1 00:28:03.609 05:47:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:03.609 05:47:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:03.609 05:47:07 -- common/autotest_common.sh@640 -- # local es=0 00:28:03.609 05:47:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:03.609 05:47:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.609 05:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:03.609 05:47:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.609 05:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:03.609 05:47:07 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.609 05:47:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:03.609 05:47:07 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.609 05:47:07 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:03.868 05:47:07 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:03.868 [2024-10-07 05:47:07.655025] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:03.868 [2024-10-07 05:47:07.655396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178505 ] 00:28:03.868 [2024-10-07 05:47:07.823897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.125 [2024-10-07 05:47:07.990095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.382 [2024-10-07 05:47:08.237510] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:04.382 [2024-10-07 05:47:08.237891] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:04.382 [2024-10-07 05:47:08.237964] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:04.948 [2024-10-07 05:47:08.813764] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:05.207 05:47:09 -- common/autotest_common.sh@643 -- # es=216 00:28:05.207 05:47:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:05.207 05:47:09 -- common/autotest_common.sh@652 -- # es=88 00:28:05.207 05:47:09 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:05.207 05:47:09 -- common/autotest_common.sh@660 -- # es=1 00:28:05.207 05:47:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:05.207 05:47:09 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:05.207 05:47:09 -- dd/common.sh@98 -- # xtrace_disable 00:28:05.207 05:47:09 -- common/autotest_common.sh@10 -- # set +x 00:28:05.207 05:47:09 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:05.466 [2024-10-07 05:47:09.220563] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
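The directory and nofollow cases in this stretch are negative tests: each spdk_dd invocation is expected to fail, and the NOT wrapper converts the non-zero exit status (es=236 and es=216 above, folded down to es=1) into a pass. A rough sketch of the nofollow half, using only flags visible in the trace; the directory variant is analogous, pointing --iflag=directory / --oflag=directory at a regular file and expecting "Not a directory":

    DD="/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio"
    D=/home/vagrant/spdk_repo/spdk/test/dd
    ln -fs "$D/dd.dump0" "$D/dd.dump0.link"
    ln -fs "$D/dd.dump1" "$D/dd.dump1.link"
    # reading or writing through a symlink with nofollow must be rejected
    ! $DD --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"
    ! $DD --if="$D/dd.dump0" --of="$D/dd.dump1.link" --oflag=nofollow
    # without the flag the link is followed and the copy succeeds
    $DD --if="$D/dd.dump0.link" --of="$D/dd.dump1"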
00:28:05.466 [2024-10-07 05:47:09.220772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178521 ] 00:28:05.466 [2024-10-07 05:47:09.391380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.726 [2024-10-07 05:47:09.558151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.921  Copying: 512/512 [B] (average 500 kBps) 00:28:06.922 00:28:06.922 ************************************ 00:28:06.922 END TEST dd_flag_nofollow_forced_aio 00:28:06.922 ************************************ 00:28:06.922 05:47:10 -- dd/posix.sh@49 -- # [[ c64q8qz3saj1helqka4iy8hi2110axndyrzqta0musz2teholak2s9cw31xn5euziexs9urregyateb42jtj9upn87ci8toufnfvtf5ddh6zfgkqrtxrtnrrppgs12upxplmw0xwzh8s00zfn586s9lldfmcgrwskljeudsi2el4kbuvlgqgtao2ukhjfv0snnur5gte6uktrqnnlceaa5o1vw016ka7o48hkbyvcrf059jdy2f94wiaqiq2l28gv7n38fskv5n1bgahim7bw9cnugonwt7gib03ix6g5g430j0vrsh5vd0muu62gmr2yoavt4himt9sn0zmy0nm1l05qt0vsw82rxb1arxxgnkhwbp3ht047iiyd8y0exm44cc4eoio60utk2ckii7ne419hpptrnafcz77gybx4u9qsvfls0b8chdhkg8e3hck6sbnd5o29f8xmyawgtb1wznzagm2lmpa6pr2w0a2s4kbsinku0tdr3nlmkl6lu9e == \c\6\4\q\8\q\z\3\s\a\j\1\h\e\l\q\k\a\4\i\y\8\h\i\2\1\1\0\a\x\n\d\y\r\z\q\t\a\0\m\u\s\z\2\t\e\h\o\l\a\k\2\s\9\c\w\3\1\x\n\5\e\u\z\i\e\x\s\9\u\r\r\e\g\y\a\t\e\b\4\2\j\t\j\9\u\p\n\8\7\c\i\8\t\o\u\f\n\f\v\t\f\5\d\d\h\6\z\f\g\k\q\r\t\x\r\t\n\r\r\p\p\g\s\1\2\u\p\x\p\l\m\w\0\x\w\z\h\8\s\0\0\z\f\n\5\8\6\s\9\l\l\d\f\m\c\g\r\w\s\k\l\j\e\u\d\s\i\2\e\l\4\k\b\u\v\l\g\q\g\t\a\o\2\u\k\h\j\f\v\0\s\n\n\u\r\5\g\t\e\6\u\k\t\r\q\n\n\l\c\e\a\a\5\o\1\v\w\0\1\6\k\a\7\o\4\8\h\k\b\y\v\c\r\f\0\5\9\j\d\y\2\f\9\4\w\i\a\q\i\q\2\l\2\8\g\v\7\n\3\8\f\s\k\v\5\n\1\b\g\a\h\i\m\7\b\w\9\c\n\u\g\o\n\w\t\7\g\i\b\0\3\i\x\6\g\5\g\4\3\0\j\0\v\r\s\h\5\v\d\0\m\u\u\6\2\g\m\r\2\y\o\a\v\t\4\h\i\m\t\9\s\n\0\z\m\y\0\n\m\1\l\0\5\q\t\0\v\s\w\8\2\r\x\b\1\a\r\x\x\g\n\k\h\w\b\p\3\h\t\0\4\7\i\i\y\d\8\y\0\e\x\m\4\4\c\c\4\e\o\i\o\6\0\u\t\k\2\c\k\i\i\7\n\e\4\1\9\h\p\p\t\r\n\a\f\c\z\7\7\g\y\b\x\4\u\9\q\s\v\f\l\s\0\b\8\c\h\d\h\k\g\8\e\3\h\c\k\6\s\b\n\d\5\o\2\9\f\8\x\m\y\a\w\g\t\b\1\w\z\n\z\a\g\m\2\l\m\p\a\6\p\r\2\w\0\a\2\s\4\k\b\s\i\n\k\u\0\t\d\r\3\n\l\m\k\l\6\l\u\9\e ]] 00:28:06.922 00:28:06.922 real 0m4.726s 00:28:06.922 user 0m3.705s 00:28:06.922 sys 0m0.670s 00:28:06.922 05:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.922 05:47:10 -- common/autotest_common.sh@10 -- # set +x 00:28:06.922 05:47:10 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:06.922 05:47:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:06.922 05:47:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:06.922 05:47:10 -- common/autotest_common.sh@10 -- # set +x 00:28:06.922 ************************************ 00:28:06.922 START TEST dd_flag_noatime_forced_aio 00:28:06.922 ************************************ 00:28:06.922 05:47:10 -- common/autotest_common.sh@1104 -- # noatime 00:28:06.922 05:47:10 -- dd/posix.sh@53 -- # local atime_if 00:28:06.922 05:47:10 -- dd/posix.sh@54 -- # local atime_of 00:28:06.922 05:47:10 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:06.922 05:47:10 -- dd/common.sh@98 -- # xtrace_disable 00:28:06.922 05:47:10 -- common/autotest_common.sh@10 -- # set +x 00:28:06.922 05:47:10 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:06.922 05:47:10 -- dd/posix.sh@60 -- # atime_if=1728280029 
00:28:06.922 05:47:10 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:06.922 05:47:10 -- dd/posix.sh@61 -- # atime_of=1728280030 00:28:06.922 05:47:10 -- dd/posix.sh@66 -- # sleep 1 00:28:08.300 05:47:11 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:08.300 [2024-10-07 05:47:11.931202] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:08.300 [2024-10-07 05:47:11.931432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178585 ] 00:28:08.300 [2024-10-07 05:47:12.100414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.301 [2024-10-07 05:47:12.268535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.496  Copying: 512/512 [B] (average 500 kBps) 00:28:09.496 00:28:09.755 05:47:13 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:09.755 05:47:13 -- dd/posix.sh@69 -- # (( atime_if == 1728280029 )) 00:28:09.755 05:47:13 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:09.755 05:47:13 -- dd/posix.sh@70 -- # (( atime_of == 1728280030 )) 00:28:09.755 05:47:13 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:09.755 [2024-10-07 05:47:13.553516] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:09.755 [2024-10-07 05:47:13.553727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178613 ] 00:28:09.755 [2024-10-07 05:47:13.723161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.014 [2024-10-07 05:47:13.902968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.209  Copying: 512/512 [B] (average 500 kBps) 00:28:11.209 00:28:11.209 05:47:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:11.209 05:47:15 -- dd/posix.sh@73 -- # (( atime_if < 1728280034 )) 00:28:11.209 00:28:11.209 real 0m4.269s 00:28:11.209 user 0m2.522s 00:28:11.209 sys 0m0.477s 00:28:11.209 ************************************ 00:28:11.209 END TEST dd_flag_noatime_forced_aio 00:28:11.209 ************************************ 00:28:11.209 05:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.209 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.209 05:47:15 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:11.209 05:47:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:11.209 05:47:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:11.209 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.209 ************************************ 00:28:11.209 START TEST dd_flags_misc_forced_aio 00:28:11.209 ************************************ 00:28:11.209 05:47:15 -- common/autotest_common.sh@1104 -- # io 00:28:11.209 05:47:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:11.209 05:47:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:11.209 05:47:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:11.209 05:47:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:11.209 05:47:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:11.209 05:47:15 -- dd/common.sh@98 -- # xtrace_disable 00:28:11.209 05:47:15 -- common/autotest_common.sh@10 -- # set +x 00:28:11.209 05:47:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:11.209 05:47:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:11.468 [2024-10-07 05:47:15.219567] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
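The noatime verdict rests on the two epoch values captured above: 1728280029 for dd.dump0 and 1728280030 for dd.dump1. In outline, and assuming the filesystem tracks access times at one-second granularity:

    DD="/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio"
    D=/home/vagrant/spdk_repo/spdk/test/dd
    atime_if=$(stat --printf=%X "$D/dd.dump0")   # access time before any copy
    sleep 1
    # read with noatime: the source atime must not move
    $DD --if="$D/dd.dump0" --iflag=noatime --of="$D/dd.dump1"
    (( $(stat --printf=%X "$D/dd.dump0") == atime_if ))
    # read again without the flag: the atime is now allowed to advance
    $DD --if="$D/dd.dump0" --of="$D/dd.dump1"
    (( atime_if < $(stat --printf=%X "$D/dd.dump0") ))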
00:28:11.468 [2024-10-07 05:47:15.219780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178655 ] 00:28:11.468 [2024-10-07 05:47:15.388169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.727 [2024-10-07 05:47:15.557434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.921  Copying: 512/512 [B] (average 125 kBps) 00:28:12.921 00:28:12.921 05:47:16 -- dd/posix.sh@93 -- # [[ 7tgclmrilkotc06mgd8sdfh8c9blil80fpbe5ppts75yjdzijn2wwohosxwlsr4427s9wnejjwwxtkje7o8nryj17aj79aujbznsv6y7obgb8p3i6ces7w6w5syr3w7gn5wxmckcc95kzza2pm2k3nwj5c75q5m3255ceohdjhd2iid9pz4yb0nndythxptm05u4nree5ulxoojja06grt8dwas814tnebpi8tff7g9fhnaikpp3dl4pxregtj6ufeuo1jnb5qe7ozdsramebjvy8hf7n48yrlvoay1i52w7t3izthhfhi81wcduzlwv85loe0la3q6j1fc6xdifba3y2okd4jtjy9garajkm88s6ftuowspflto54l99xzqj57zesifxium7f2vdfcxuwvk5cwlu37vvcrw04ur86pbmhmh3yupjhksbacbpraqot2xl83g4is8cfu6n8vs0guc9q96ki3n6h6e21v6oqtr4lys4a9kupbs01b7wuep == \7\t\g\c\l\m\r\i\l\k\o\t\c\0\6\m\g\d\8\s\d\f\h\8\c\9\b\l\i\l\8\0\f\p\b\e\5\p\p\t\s\7\5\y\j\d\z\i\j\n\2\w\w\o\h\o\s\x\w\l\s\r\4\4\2\7\s\9\w\n\e\j\j\w\w\x\t\k\j\e\7\o\8\n\r\y\j\1\7\a\j\7\9\a\u\j\b\z\n\s\v\6\y\7\o\b\g\b\8\p\3\i\6\c\e\s\7\w\6\w\5\s\y\r\3\w\7\g\n\5\w\x\m\c\k\c\c\9\5\k\z\z\a\2\p\m\2\k\3\n\w\j\5\c\7\5\q\5\m\3\2\5\5\c\e\o\h\d\j\h\d\2\i\i\d\9\p\z\4\y\b\0\n\n\d\y\t\h\x\p\t\m\0\5\u\4\n\r\e\e\5\u\l\x\o\o\j\j\a\0\6\g\r\t\8\d\w\a\s\8\1\4\t\n\e\b\p\i\8\t\f\f\7\g\9\f\h\n\a\i\k\p\p\3\d\l\4\p\x\r\e\g\t\j\6\u\f\e\u\o\1\j\n\b\5\q\e\7\o\z\d\s\r\a\m\e\b\j\v\y\8\h\f\7\n\4\8\y\r\l\v\o\a\y\1\i\5\2\w\7\t\3\i\z\t\h\h\f\h\i\8\1\w\c\d\u\z\l\w\v\8\5\l\o\e\0\l\a\3\q\6\j\1\f\c\6\x\d\i\f\b\a\3\y\2\o\k\d\4\j\t\j\y\9\g\a\r\a\j\k\m\8\8\s\6\f\t\u\o\w\s\p\f\l\t\o\5\4\l\9\9\x\z\q\j\5\7\z\e\s\i\f\x\i\u\m\7\f\2\v\d\f\c\x\u\w\v\k\5\c\w\l\u\3\7\v\v\c\r\w\0\4\u\r\8\6\p\b\m\h\m\h\3\y\u\p\j\h\k\s\b\a\c\b\p\r\a\q\o\t\2\x\l\8\3\g\4\i\s\8\c\f\u\6\n\8\v\s\0\g\u\c\9\q\9\6\k\i\3\n\6\h\6\e\2\1\v\6\o\q\t\r\4\l\y\s\4\a\9\k\u\p\b\s\0\1\b\7\w\u\e\p ]] 00:28:12.921 05:47:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:12.921 05:47:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:12.921 [2024-10-07 05:47:16.860527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:12.921 [2024-10-07 05:47:16.860750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178682 ] 00:28:13.180 [2024-10-07 05:47:17.031285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.439 [2024-10-07 05:47:17.197936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.632  Copying: 512/512 [B] (average 500 kBps) 00:28:14.632 00:28:14.632 05:47:18 -- dd/posix.sh@93 -- # [[ 7tgclmrilkotc06mgd8sdfh8c9blil80fpbe5ppts75yjdzijn2wwohosxwlsr4427s9wnejjwwxtkje7o8nryj17aj79aujbznsv6y7obgb8p3i6ces7w6w5syr3w7gn5wxmckcc95kzza2pm2k3nwj5c75q5m3255ceohdjhd2iid9pz4yb0nndythxptm05u4nree5ulxoojja06grt8dwas814tnebpi8tff7g9fhnaikpp3dl4pxregtj6ufeuo1jnb5qe7ozdsramebjvy8hf7n48yrlvoay1i52w7t3izthhfhi81wcduzlwv85loe0la3q6j1fc6xdifba3y2okd4jtjy9garajkm88s6ftuowspflto54l99xzqj57zesifxium7f2vdfcxuwvk5cwlu37vvcrw04ur86pbmhmh3yupjhksbacbpraqot2xl83g4is8cfu6n8vs0guc9q96ki3n6h6e21v6oqtr4lys4a9kupbs01b7wuep == \7\t\g\c\l\m\r\i\l\k\o\t\c\0\6\m\g\d\8\s\d\f\h\8\c\9\b\l\i\l\8\0\f\p\b\e\5\p\p\t\s\7\5\y\j\d\z\i\j\n\2\w\w\o\h\o\s\x\w\l\s\r\4\4\2\7\s\9\w\n\e\j\j\w\w\x\t\k\j\e\7\o\8\n\r\y\j\1\7\a\j\7\9\a\u\j\b\z\n\s\v\6\y\7\o\b\g\b\8\p\3\i\6\c\e\s\7\w\6\w\5\s\y\r\3\w\7\g\n\5\w\x\m\c\k\c\c\9\5\k\z\z\a\2\p\m\2\k\3\n\w\j\5\c\7\5\q\5\m\3\2\5\5\c\e\o\h\d\j\h\d\2\i\i\d\9\p\z\4\y\b\0\n\n\d\y\t\h\x\p\t\m\0\5\u\4\n\r\e\e\5\u\l\x\o\o\j\j\a\0\6\g\r\t\8\d\w\a\s\8\1\4\t\n\e\b\p\i\8\t\f\f\7\g\9\f\h\n\a\i\k\p\p\3\d\l\4\p\x\r\e\g\t\j\6\u\f\e\u\o\1\j\n\b\5\q\e\7\o\z\d\s\r\a\m\e\b\j\v\y\8\h\f\7\n\4\8\y\r\l\v\o\a\y\1\i\5\2\w\7\t\3\i\z\t\h\h\f\h\i\8\1\w\c\d\u\z\l\w\v\8\5\l\o\e\0\l\a\3\q\6\j\1\f\c\6\x\d\i\f\b\a\3\y\2\o\k\d\4\j\t\j\y\9\g\a\r\a\j\k\m\8\8\s\6\f\t\u\o\w\s\p\f\l\t\o\5\4\l\9\9\x\z\q\j\5\7\z\e\s\i\f\x\i\u\m\7\f\2\v\d\f\c\x\u\w\v\k\5\c\w\l\u\3\7\v\v\c\r\w\0\4\u\r\8\6\p\b\m\h\m\h\3\y\u\p\j\h\k\s\b\a\c\b\p\r\a\q\o\t\2\x\l\8\3\g\4\i\s\8\c\f\u\6\n\8\v\s\0\g\u\c\9\q\9\6\k\i\3\n\6\h\6\e\2\1\v\6\o\q\t\r\4\l\y\s\4\a\9\k\u\p\b\s\0\1\b\7\w\u\e\p ]] 00:28:14.632 05:47:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:14.632 05:47:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:14.632 [2024-10-07 05:47:18.458815] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:14.632 [2024-10-07 05:47:18.459021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178699 ] 00:28:14.890 [2024-10-07 05:47:18.631878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.890 [2024-10-07 05:47:18.800169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.085  Copying: 512/512 [B] (average 166 kBps) 00:28:16.085 00:28:16.085 05:47:19 -- dd/posix.sh@93 -- # [[ 7tgclmrilkotc06mgd8sdfh8c9blil80fpbe5ppts75yjdzijn2wwohosxwlsr4427s9wnejjwwxtkje7o8nryj17aj79aujbznsv6y7obgb8p3i6ces7w6w5syr3w7gn5wxmckcc95kzza2pm2k3nwj5c75q5m3255ceohdjhd2iid9pz4yb0nndythxptm05u4nree5ulxoojja06grt8dwas814tnebpi8tff7g9fhnaikpp3dl4pxregtj6ufeuo1jnb5qe7ozdsramebjvy8hf7n48yrlvoay1i52w7t3izthhfhi81wcduzlwv85loe0la3q6j1fc6xdifba3y2okd4jtjy9garajkm88s6ftuowspflto54l99xzqj57zesifxium7f2vdfcxuwvk5cwlu37vvcrw04ur86pbmhmh3yupjhksbacbpraqot2xl83g4is8cfu6n8vs0guc9q96ki3n6h6e21v6oqtr4lys4a9kupbs01b7wuep == \7\t\g\c\l\m\r\i\l\k\o\t\c\0\6\m\g\d\8\s\d\f\h\8\c\9\b\l\i\l\8\0\f\p\b\e\5\p\p\t\s\7\5\y\j\d\z\i\j\n\2\w\w\o\h\o\s\x\w\l\s\r\4\4\2\7\s\9\w\n\e\j\j\w\w\x\t\k\j\e\7\o\8\n\r\y\j\1\7\a\j\7\9\a\u\j\b\z\n\s\v\6\y\7\o\b\g\b\8\p\3\i\6\c\e\s\7\w\6\w\5\s\y\r\3\w\7\g\n\5\w\x\m\c\k\c\c\9\5\k\z\z\a\2\p\m\2\k\3\n\w\j\5\c\7\5\q\5\m\3\2\5\5\c\e\o\h\d\j\h\d\2\i\i\d\9\p\z\4\y\b\0\n\n\d\y\t\h\x\p\t\m\0\5\u\4\n\r\e\e\5\u\l\x\o\o\j\j\a\0\6\g\r\t\8\d\w\a\s\8\1\4\t\n\e\b\p\i\8\t\f\f\7\g\9\f\h\n\a\i\k\p\p\3\d\l\4\p\x\r\e\g\t\j\6\u\f\e\u\o\1\j\n\b\5\q\e\7\o\z\d\s\r\a\m\e\b\j\v\y\8\h\f\7\n\4\8\y\r\l\v\o\a\y\1\i\5\2\w\7\t\3\i\z\t\h\h\f\h\i\8\1\w\c\d\u\z\l\w\v\8\5\l\o\e\0\l\a\3\q\6\j\1\f\c\6\x\d\i\f\b\a\3\y\2\o\k\d\4\j\t\j\y\9\g\a\r\a\j\k\m\8\8\s\6\f\t\u\o\w\s\p\f\l\t\o\5\4\l\9\9\x\z\q\j\5\7\z\e\s\i\f\x\i\u\m\7\f\2\v\d\f\c\x\u\w\v\k\5\c\w\l\u\3\7\v\v\c\r\w\0\4\u\r\8\6\p\b\m\h\m\h\3\y\u\p\j\h\k\s\b\a\c\b\p\r\a\q\o\t\2\x\l\8\3\g\4\i\s\8\c\f\u\6\n\8\v\s\0\g\u\c\9\q\9\6\k\i\3\n\6\h\6\e\2\1\v\6\o\q\t\r\4\l\y\s\4\a\9\k\u\p\b\s\0\1\b\7\w\u\e\p ]] 00:28:16.085 05:47:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:16.085 05:47:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:16.344 [2024-10-07 05:47:20.067683] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:16.344 [2024-10-07 05:47:20.067952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178727 ] 00:28:16.344 [2024-10-07 05:47:20.236137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.602 [2024-10-07 05:47:20.392572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.837  Copying: 512/512 [B] (average 166 kBps) 00:28:17.837 00:28:17.837 05:47:21 -- dd/posix.sh@93 -- # [[ 7tgclmrilkotc06mgd8sdfh8c9blil80fpbe5ppts75yjdzijn2wwohosxwlsr4427s9wnejjwwxtkje7o8nryj17aj79aujbznsv6y7obgb8p3i6ces7w6w5syr3w7gn5wxmckcc95kzza2pm2k3nwj5c75q5m3255ceohdjhd2iid9pz4yb0nndythxptm05u4nree5ulxoojja06grt8dwas814tnebpi8tff7g9fhnaikpp3dl4pxregtj6ufeuo1jnb5qe7ozdsramebjvy8hf7n48yrlvoay1i52w7t3izthhfhi81wcduzlwv85loe0la3q6j1fc6xdifba3y2okd4jtjy9garajkm88s6ftuowspflto54l99xzqj57zesifxium7f2vdfcxuwvk5cwlu37vvcrw04ur86pbmhmh3yupjhksbacbpraqot2xl83g4is8cfu6n8vs0guc9q96ki3n6h6e21v6oqtr4lys4a9kupbs01b7wuep == \7\t\g\c\l\m\r\i\l\k\o\t\c\0\6\m\g\d\8\s\d\f\h\8\c\9\b\l\i\l\8\0\f\p\b\e\5\p\p\t\s\7\5\y\j\d\z\i\j\n\2\w\w\o\h\o\s\x\w\l\s\r\4\4\2\7\s\9\w\n\e\j\j\w\w\x\t\k\j\e\7\o\8\n\r\y\j\1\7\a\j\7\9\a\u\j\b\z\n\s\v\6\y\7\o\b\g\b\8\p\3\i\6\c\e\s\7\w\6\w\5\s\y\r\3\w\7\g\n\5\w\x\m\c\k\c\c\9\5\k\z\z\a\2\p\m\2\k\3\n\w\j\5\c\7\5\q\5\m\3\2\5\5\c\e\o\h\d\j\h\d\2\i\i\d\9\p\z\4\y\b\0\n\n\d\y\t\h\x\p\t\m\0\5\u\4\n\r\e\e\5\u\l\x\o\o\j\j\a\0\6\g\r\t\8\d\w\a\s\8\1\4\t\n\e\b\p\i\8\t\f\f\7\g\9\f\h\n\a\i\k\p\p\3\d\l\4\p\x\r\e\g\t\j\6\u\f\e\u\o\1\j\n\b\5\q\e\7\o\z\d\s\r\a\m\e\b\j\v\y\8\h\f\7\n\4\8\y\r\l\v\o\a\y\1\i\5\2\w\7\t\3\i\z\t\h\h\f\h\i\8\1\w\c\d\u\z\l\w\v\8\5\l\o\e\0\l\a\3\q\6\j\1\f\c\6\x\d\i\f\b\a\3\y\2\o\k\d\4\j\t\j\y\9\g\a\r\a\j\k\m\8\8\s\6\f\t\u\o\w\s\p\f\l\t\o\5\4\l\9\9\x\z\q\j\5\7\z\e\s\i\f\x\i\u\m\7\f\2\v\d\f\c\x\u\w\v\k\5\c\w\l\u\3\7\v\v\c\r\w\0\4\u\r\8\6\p\b\m\h\m\h\3\y\u\p\j\h\k\s\b\a\c\b\p\r\a\q\o\t\2\x\l\8\3\g\4\i\s\8\c\f\u\6\n\8\v\s\0\g\u\c\9\q\9\6\k\i\3\n\6\h\6\e\2\1\v\6\o\q\t\r\4\l\y\s\4\a\9\k\u\p\b\s\0\1\b\7\w\u\e\p ]] 00:28:17.837 05:47:21 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:17.837 05:47:21 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:17.837 05:47:21 -- dd/common.sh@98 -- # xtrace_disable 00:28:17.837 05:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 05:47:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:17.837 05:47:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:17.837 [2024-10-07 05:47:21.724417] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:17.837 [2024-10-07 05:47:21.725159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178752 ] 00:28:18.102 [2024-10-07 05:47:21.892254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.360 [2024-10-07 05:47:22.082065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.554  Copying: 512/512 [B] (average 500 kBps) 00:28:19.554 00:28:19.554 05:47:23 -- dd/posix.sh@93 -- # [[ xrzmtuvbzukil43xaisojxv0m64fugwhhmtrn2rtockzs93jvf7ou8ywhnmpgf39r9rujs8bopaghmykg2toq3u0pje7rreigahkshcw8h8cdjqj6gb5ptccv6g2iwjj51ja3zwfs6dr3c56m4ue8v8az7hxsqoezh9gn6534pc43e49v29tlaaj84gsbs9zoxzr3xvmfi05jqbcbhqrsdjrcxt8k6blt7g0mbalwjic437dazf33xdusqc0det7e2vjp8h887m72mkr81q4i9gq1kzjizt4oiklm3pjje0ad0feuje60eee9ycuxanda5vm5rqxapvxugtlzpnbjowx1dp3cyk8a2g1rsqbefcakt5cbvpsb2bfzrgzqv55mjc2sfebvfo5etq77vhay50be279qx7lqcyweapirm6l7tjdghwinpu2fj61jlz0e80dt5ucqv6uqrhb3zub8ri1h9acvsqq48vx5z4ex9eh5ajcto4so7y4csdr7gvo == \x\r\z\m\t\u\v\b\z\u\k\i\l\4\3\x\a\i\s\o\j\x\v\0\m\6\4\f\u\g\w\h\h\m\t\r\n\2\r\t\o\c\k\z\s\9\3\j\v\f\7\o\u\8\y\w\h\n\m\p\g\f\3\9\r\9\r\u\j\s\8\b\o\p\a\g\h\m\y\k\g\2\t\o\q\3\u\0\p\j\e\7\r\r\e\i\g\a\h\k\s\h\c\w\8\h\8\c\d\j\q\j\6\g\b\5\p\t\c\c\v\6\g\2\i\w\j\j\5\1\j\a\3\z\w\f\s\6\d\r\3\c\5\6\m\4\u\e\8\v\8\a\z\7\h\x\s\q\o\e\z\h\9\g\n\6\5\3\4\p\c\4\3\e\4\9\v\2\9\t\l\a\a\j\8\4\g\s\b\s\9\z\o\x\z\r\3\x\v\m\f\i\0\5\j\q\b\c\b\h\q\r\s\d\j\r\c\x\t\8\k\6\b\l\t\7\g\0\m\b\a\l\w\j\i\c\4\3\7\d\a\z\f\3\3\x\d\u\s\q\c\0\d\e\t\7\e\2\v\j\p\8\h\8\8\7\m\7\2\m\k\r\8\1\q\4\i\9\g\q\1\k\z\j\i\z\t\4\o\i\k\l\m\3\p\j\j\e\0\a\d\0\f\e\u\j\e\6\0\e\e\e\9\y\c\u\x\a\n\d\a\5\v\m\5\r\q\x\a\p\v\x\u\g\t\l\z\p\n\b\j\o\w\x\1\d\p\3\c\y\k\8\a\2\g\1\r\s\q\b\e\f\c\a\k\t\5\c\b\v\p\s\b\2\b\f\z\r\g\z\q\v\5\5\m\j\c\2\s\f\e\b\v\f\o\5\e\t\q\7\7\v\h\a\y\5\0\b\e\2\7\9\q\x\7\l\q\c\y\w\e\a\p\i\r\m\6\l\7\t\j\d\g\h\w\i\n\p\u\2\f\j\6\1\j\l\z\0\e\8\0\d\t\5\u\c\q\v\6\u\q\r\h\b\3\z\u\b\8\r\i\1\h\9\a\c\v\s\q\q\4\8\v\x\5\z\4\e\x\9\e\h\5\a\j\c\t\o\4\s\o\7\y\4\c\s\d\r\7\g\v\o ]] 00:28:19.554 05:47:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:19.554 05:47:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:19.554 [2024-10-07 05:47:23.497939] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:19.554 [2024-10-07 05:47:23.498147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178771 ] 00:28:19.812 [2024-10-07 05:47:23.667657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.071 [2024-10-07 05:47:23.867115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.266  Copying: 512/512 [B] (average 500 kBps) 00:28:21.266 00:28:21.266 05:47:25 -- dd/posix.sh@93 -- # [[ xrzmtuvbzukil43xaisojxv0m64fugwhhmtrn2rtockzs93jvf7ou8ywhnmpgf39r9rujs8bopaghmykg2toq3u0pje7rreigahkshcw8h8cdjqj6gb5ptccv6g2iwjj51ja3zwfs6dr3c56m4ue8v8az7hxsqoezh9gn6534pc43e49v29tlaaj84gsbs9zoxzr3xvmfi05jqbcbhqrsdjrcxt8k6blt7g0mbalwjic437dazf33xdusqc0det7e2vjp8h887m72mkr81q4i9gq1kzjizt4oiklm3pjje0ad0feuje60eee9ycuxanda5vm5rqxapvxugtlzpnbjowx1dp3cyk8a2g1rsqbefcakt5cbvpsb2bfzrgzqv55mjc2sfebvfo5etq77vhay50be279qx7lqcyweapirm6l7tjdghwinpu2fj61jlz0e80dt5ucqv6uqrhb3zub8ri1h9acvsqq48vx5z4ex9eh5ajcto4so7y4csdr7gvo == \x\r\z\m\t\u\v\b\z\u\k\i\l\4\3\x\a\i\s\o\j\x\v\0\m\6\4\f\u\g\w\h\h\m\t\r\n\2\r\t\o\c\k\z\s\9\3\j\v\f\7\o\u\8\y\w\h\n\m\p\g\f\3\9\r\9\r\u\j\s\8\b\o\p\a\g\h\m\y\k\g\2\t\o\q\3\u\0\p\j\e\7\r\r\e\i\g\a\h\k\s\h\c\w\8\h\8\c\d\j\q\j\6\g\b\5\p\t\c\c\v\6\g\2\i\w\j\j\5\1\j\a\3\z\w\f\s\6\d\r\3\c\5\6\m\4\u\e\8\v\8\a\z\7\h\x\s\q\o\e\z\h\9\g\n\6\5\3\4\p\c\4\3\e\4\9\v\2\9\t\l\a\a\j\8\4\g\s\b\s\9\z\o\x\z\r\3\x\v\m\f\i\0\5\j\q\b\c\b\h\q\r\s\d\j\r\c\x\t\8\k\6\b\l\t\7\g\0\m\b\a\l\w\j\i\c\4\3\7\d\a\z\f\3\3\x\d\u\s\q\c\0\d\e\t\7\e\2\v\j\p\8\h\8\8\7\m\7\2\m\k\r\8\1\q\4\i\9\g\q\1\k\z\j\i\z\t\4\o\i\k\l\m\3\p\j\j\e\0\a\d\0\f\e\u\j\e\6\0\e\e\e\9\y\c\u\x\a\n\d\a\5\v\m\5\r\q\x\a\p\v\x\u\g\t\l\z\p\n\b\j\o\w\x\1\d\p\3\c\y\k\8\a\2\g\1\r\s\q\b\e\f\c\a\k\t\5\c\b\v\p\s\b\2\b\f\z\r\g\z\q\v\5\5\m\j\c\2\s\f\e\b\v\f\o\5\e\t\q\7\7\v\h\a\y\5\0\b\e\2\7\9\q\x\7\l\q\c\y\w\e\a\p\i\r\m\6\l\7\t\j\d\g\h\w\i\n\p\u\2\f\j\6\1\j\l\z\0\e\8\0\d\t\5\u\c\q\v\6\u\q\r\h\b\3\z\u\b\8\r\i\1\h\9\a\c\v\s\q\q\4\8\v\x\5\z\4\e\x\9\e\h\5\a\j\c\t\o\4\s\o\7\y\4\c\s\d\r\7\g\v\o ]] 00:28:21.266 05:47:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:21.266 05:47:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:21.525 [2024-10-07 05:47:25.285801] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:21.525 [2024-10-07 05:47:25.286648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178800 ] 00:28:21.525 [2024-10-07 05:47:25.455541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.785 [2024-10-07 05:47:25.655902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.018  Copying: 512/512 [B] (average 125 kBps) 00:28:23.018 00:28:23.019 05:47:26 -- dd/posix.sh@93 -- # [[ xrzmtuvbzukil43xaisojxv0m64fugwhhmtrn2rtockzs93jvf7ou8ywhnmpgf39r9rujs8bopaghmykg2toq3u0pje7rreigahkshcw8h8cdjqj6gb5ptccv6g2iwjj51ja3zwfs6dr3c56m4ue8v8az7hxsqoezh9gn6534pc43e49v29tlaaj84gsbs9zoxzr3xvmfi05jqbcbhqrsdjrcxt8k6blt7g0mbalwjic437dazf33xdusqc0det7e2vjp8h887m72mkr81q4i9gq1kzjizt4oiklm3pjje0ad0feuje60eee9ycuxanda5vm5rqxapvxugtlzpnbjowx1dp3cyk8a2g1rsqbefcakt5cbvpsb2bfzrgzqv55mjc2sfebvfo5etq77vhay50be279qx7lqcyweapirm6l7tjdghwinpu2fj61jlz0e80dt5ucqv6uqrhb3zub8ri1h9acvsqq48vx5z4ex9eh5ajcto4so7y4csdr7gvo == \x\r\z\m\t\u\v\b\z\u\k\i\l\4\3\x\a\i\s\o\j\x\v\0\m\6\4\f\u\g\w\h\h\m\t\r\n\2\r\t\o\c\k\z\s\9\3\j\v\f\7\o\u\8\y\w\h\n\m\p\g\f\3\9\r\9\r\u\j\s\8\b\o\p\a\g\h\m\y\k\g\2\t\o\q\3\u\0\p\j\e\7\r\r\e\i\g\a\h\k\s\h\c\w\8\h\8\c\d\j\q\j\6\g\b\5\p\t\c\c\v\6\g\2\i\w\j\j\5\1\j\a\3\z\w\f\s\6\d\r\3\c\5\6\m\4\u\e\8\v\8\a\z\7\h\x\s\q\o\e\z\h\9\g\n\6\5\3\4\p\c\4\3\e\4\9\v\2\9\t\l\a\a\j\8\4\g\s\b\s\9\z\o\x\z\r\3\x\v\m\f\i\0\5\j\q\b\c\b\h\q\r\s\d\j\r\c\x\t\8\k\6\b\l\t\7\g\0\m\b\a\l\w\j\i\c\4\3\7\d\a\z\f\3\3\x\d\u\s\q\c\0\d\e\t\7\e\2\v\j\p\8\h\8\8\7\m\7\2\m\k\r\8\1\q\4\i\9\g\q\1\k\z\j\i\z\t\4\o\i\k\l\m\3\p\j\j\e\0\a\d\0\f\e\u\j\e\6\0\e\e\e\9\y\c\u\x\a\n\d\a\5\v\m\5\r\q\x\a\p\v\x\u\g\t\l\z\p\n\b\j\o\w\x\1\d\p\3\c\y\k\8\a\2\g\1\r\s\q\b\e\f\c\a\k\t\5\c\b\v\p\s\b\2\b\f\z\r\g\z\q\v\5\5\m\j\c\2\s\f\e\b\v\f\o\5\e\t\q\7\7\v\h\a\y\5\0\b\e\2\7\9\q\x\7\l\q\c\y\w\e\a\p\i\r\m\6\l\7\t\j\d\g\h\w\i\n\p\u\2\f\j\6\1\j\l\z\0\e\8\0\d\t\5\u\c\q\v\6\u\q\r\h\b\3\z\u\b\8\r\i\1\h\9\a\c\v\s\q\q\4\8\v\x\5\z\4\e\x\9\e\h\5\a\j\c\t\o\4\s\o\7\y\4\c\s\d\r\7\g\v\o ]] 00:28:23.019 05:47:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:23.019 05:47:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:23.278 [2024-10-07 05:47:27.034532] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
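The dd_flags_misc_forced_aio pass iterates a small matrix: every read flag in flags_ro=(direct nonblock) is paired with every write flag in flags_rw=(direct nonblock sync dsync), which is why eight separate 512-byte copies run in this block. A condensed sketch of the driving loop:

    DD="/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio"
    D=/home/vagrant/spdk_repo/spdk/test/dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            $DD --if="$D/dd.dump0" --iflag="$flag_ro" \
                --of="$D/dd.dump1" --oflag="$flag_rw"
            # every copy is validated by comparing the two dump files byte for byte
            [[ "$(cat "$D/dd.dump1")" == "$(cat "$D/dd.dump0")" ]]
        done
    done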
00:28:23.278 [2024-10-07 05:47:27.034720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178824 ] 00:28:23.278 [2024-10-07 05:47:27.187104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.537 [2024-10-07 05:47:27.372645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.732  Copying: 512/512 [B] (average 166 kBps) 00:28:24.732 00:28:24.732 ************************************ 00:28:24.732 END TEST dd_flags_misc_forced_aio 00:28:24.732 ************************************ 00:28:24.732 05:47:28 -- dd/posix.sh@93 -- # [[ xrzmtuvbzukil43xaisojxv0m64fugwhhmtrn2rtockzs93jvf7ou8ywhnmpgf39r9rujs8bopaghmykg2toq3u0pje7rreigahkshcw8h8cdjqj6gb5ptccv6g2iwjj51ja3zwfs6dr3c56m4ue8v8az7hxsqoezh9gn6534pc43e49v29tlaaj84gsbs9zoxzr3xvmfi05jqbcbhqrsdjrcxt8k6blt7g0mbalwjic437dazf33xdusqc0det7e2vjp8h887m72mkr81q4i9gq1kzjizt4oiklm3pjje0ad0feuje60eee9ycuxanda5vm5rqxapvxugtlzpnbjowx1dp3cyk8a2g1rsqbefcakt5cbvpsb2bfzrgzqv55mjc2sfebvfo5etq77vhay50be279qx7lqcyweapirm6l7tjdghwinpu2fj61jlz0e80dt5ucqv6uqrhb3zub8ri1h9acvsqq48vx5z4ex9eh5ajcto4so7y4csdr7gvo == \x\r\z\m\t\u\v\b\z\u\k\i\l\4\3\x\a\i\s\o\j\x\v\0\m\6\4\f\u\g\w\h\h\m\t\r\n\2\r\t\o\c\k\z\s\9\3\j\v\f\7\o\u\8\y\w\h\n\m\p\g\f\3\9\r\9\r\u\j\s\8\b\o\p\a\g\h\m\y\k\g\2\t\o\q\3\u\0\p\j\e\7\r\r\e\i\g\a\h\k\s\h\c\w\8\h\8\c\d\j\q\j\6\g\b\5\p\t\c\c\v\6\g\2\i\w\j\j\5\1\j\a\3\z\w\f\s\6\d\r\3\c\5\6\m\4\u\e\8\v\8\a\z\7\h\x\s\q\o\e\z\h\9\g\n\6\5\3\4\p\c\4\3\e\4\9\v\2\9\t\l\a\a\j\8\4\g\s\b\s\9\z\o\x\z\r\3\x\v\m\f\i\0\5\j\q\b\c\b\h\q\r\s\d\j\r\c\x\t\8\k\6\b\l\t\7\g\0\m\b\a\l\w\j\i\c\4\3\7\d\a\z\f\3\3\x\d\u\s\q\c\0\d\e\t\7\e\2\v\j\p\8\h\8\8\7\m\7\2\m\k\r\8\1\q\4\i\9\g\q\1\k\z\j\i\z\t\4\o\i\k\l\m\3\p\j\j\e\0\a\d\0\f\e\u\j\e\6\0\e\e\e\9\y\c\u\x\a\n\d\a\5\v\m\5\r\q\x\a\p\v\x\u\g\t\l\z\p\n\b\j\o\w\x\1\d\p\3\c\y\k\8\a\2\g\1\r\s\q\b\e\f\c\a\k\t\5\c\b\v\p\s\b\2\b\f\z\r\g\z\q\v\5\5\m\j\c\2\s\f\e\b\v\f\o\5\e\t\q\7\7\v\h\a\y\5\0\b\e\2\7\9\q\x\7\l\q\c\y\w\e\a\p\i\r\m\6\l\7\t\j\d\g\h\w\i\n\p\u\2\f\j\6\1\j\l\z\0\e\8\0\d\t\5\u\c\q\v\6\u\q\r\h\b\3\z\u\b\8\r\i\1\h\9\a\c\v\s\q\q\4\8\v\x\5\z\4\e\x\9\e\h\5\a\j\c\t\o\4\s\o\7\y\4\c\s\d\r\7\g\v\o ]] 00:28:24.732 00:28:24.732 real 0m13.560s 00:28:24.732 user 0m10.492s 00:28:24.732 sys 0m1.938s 00:28:24.732 05:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.732 05:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 05:47:28 -- dd/posix.sh@1 -- # cleanup 00:28:24.991 05:47:28 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:24.991 05:47:28 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:24.991 ************************************ 00:28:24.991 END TEST spdk_dd_posix 00:28:24.991 ************************************ 00:28:24.991 00:28:24.991 real 0m55.200s 00:28:24.991 user 0m40.937s 00:28:24.991 sys 0m8.085s 00:28:24.991 05:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.991 05:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 05:47:28 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:24.991 05:47:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.991 05:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.991 05:47:28 -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.991 ************************************ 00:28:24.991 START TEST spdk_dd_malloc 00:28:24.991 ************************************ 00:28:24.991 05:47:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:24.991 * Looking for test storage... 00:28:24.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:24.991 05:47:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:24.991 05:47:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.991 05:47:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.991 05:47:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.991 05:47:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.991 05:47:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.991 05:47:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.991 05:47:28 -- paths/export.sh@5 -- # export PATH 00:28:24.991 05:47:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.991 05:47:28 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:24.991 05:47:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.991 05:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.991 05:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 ************************************ 00:28:24.991 START TEST dd_malloc_copy 00:28:24.991 ************************************ 00:28:24.991 05:47:28 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:28:24.991 05:47:28 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:24.991 05:47:28 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:24.991 05:47:28 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:28:24.991 05:47:28 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:24.991 05:47:28 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:28:24.992 05:47:28 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:24.992 05:47:28 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:24.992 05:47:28 -- dd/malloc.sh@28 -- # gen_conf 00:28:24.992 05:47:28 -- dd/common.sh@31 -- # xtrace_disable 00:28:24.992 05:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.992 [2024-10-07 05:47:28.967505] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:24.992 [2024-10-07 05:47:28.967673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178918 ] 00:28:25.250 { 00:28:25.250 "subsystems": [ 00:28:25.250 { 00:28:25.250 "subsystem": "bdev", 00:28:25.250 "config": [ 00:28:25.250 { 00:28:25.250 "params": { 00:28:25.250 "block_size": 512, 00:28:25.250 "num_blocks": 1048576, 00:28:25.250 "name": "malloc0" 00:28:25.250 }, 00:28:25.250 "method": "bdev_malloc_create" 00:28:25.250 }, 00:28:25.250 { 00:28:25.250 "params": { 00:28:25.250 "block_size": 512, 00:28:25.250 "num_blocks": 1048576, 00:28:25.250 "name": "malloc1" 00:28:25.250 }, 00:28:25.250 "method": "bdev_malloc_create" 00:28:25.250 }, 00:28:25.250 { 00:28:25.250 "method": "bdev_wait_for_examine" 00:28:25.250 } 00:28:25.250 ] 00:28:25.250 } 00:28:25.250 ] 00:28:25.251 } 00:28:25.251 [2024-10-07 05:47:29.127809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.509 [2024-10-07 05:47:29.376808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.532  Copying: 220/512 [MB] (220 MBps) Copying: 440/512 [MB] (219 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:28:32.532 00:28:32.532 05:47:36 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:32.532 05:47:36 -- dd/malloc.sh@33 -- # gen_conf 00:28:32.532 05:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:28:32.532 05:47:36 -- common/autotest_common.sh@10 -- # set +x 00:28:32.532 [2024-10-07 05:47:36.324907] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
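The malloc copy test never touches the filesystem: both endpoints are RAM-backed bdevs declared in the JSON printed above (num_blocks=1048576 at block_size=512, i.e. 512 MiB each), and the config is streamed to spdk_dd over a file descriptor (/dev/fd/62). A minimal equivalent using a plain config file as a placeholder for that descriptor:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cat > /tmp/malloc.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_wait_for_examine" }
        ] }
      ]
    }
    EOF
    # forward copy, then the reverse direction with the same config
    $DD --ib=malloc0 --ob=malloc1 --json /tmp/malloc.json
    $DD --ib=malloc1 --ob=malloc0 --json /tmp/malloc.json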
00:28:32.532 [2024-10-07 05:47:36.325122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179013 ] 00:28:32.532 { 00:28:32.532 "subsystems": [ 00:28:32.532 { 00:28:32.532 "subsystem": "bdev", 00:28:32.532 "config": [ 00:28:32.532 { 00:28:32.532 "params": { 00:28:32.532 "block_size": 512, 00:28:32.532 "num_blocks": 1048576, 00:28:32.532 "name": "malloc0" 00:28:32.532 }, 00:28:32.532 "method": "bdev_malloc_create" 00:28:32.532 }, 00:28:32.532 { 00:28:32.532 "params": { 00:28:32.532 "block_size": 512, 00:28:32.532 "num_blocks": 1048576, 00:28:32.532 "name": "malloc1" 00:28:32.532 }, 00:28:32.532 "method": "bdev_malloc_create" 00:28:32.532 }, 00:28:32.532 { 00:28:32.532 "method": "bdev_wait_for_examine" 00:28:32.532 } 00:28:32.532 ] 00:28:32.532 } 00:28:32.532 ] 00:28:32.532 } 00:28:32.532 [2024-10-07 05:47:36.491360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.791 [2024-10-07 05:47:36.682750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.810  Copying: 220/512 [MB] (220 MBps) Copying: 440/512 [MB] (220 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:28:39.810 00:28:39.810 00:28:39.810 real 0m14.629s 00:28:39.810 user 0m13.100s 00:28:39.810 sys 0m1.398s 00:28:39.810 05:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.810 05:47:43 -- common/autotest_common.sh@10 -- # set +x 00:28:39.810 ************************************ 00:28:39.810 END TEST dd_malloc_copy 00:28:39.810 ************************************ 00:28:39.810 00:28:39.810 real 0m14.771s 00:28:39.810 user 0m13.192s 00:28:39.810 sys 0m1.451s 00:28:39.810 05:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.810 ************************************ 00:28:39.810 END TEST spdk_dd_malloc 00:28:39.810 ************************************ 00:28:39.810 05:47:43 -- common/autotest_common.sh@10 -- # set +x 00:28:39.810 05:47:43 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:39.810 05:47:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:39.810 05:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:39.810 05:47:43 -- common/autotest_common.sh@10 -- # set +x 00:28:39.810 ************************************ 00:28:39.810 START TEST spdk_dd_bdev_to_bdev 00:28:39.810 ************************************ 00:28:39.810 05:47:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:39.810 * Looking for test storage... 
00:28:39.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:39.810 05:47:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:39.810 05:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.810 05:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.810 05:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.810 05:47:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:39.810 05:47:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:39.810 05:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:39.810 05:47:43 -- paths/export.sh@5 -- # export PATH 00:28:39.810 05:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:28:39.810 05:47:43 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:39.810 05:47:43 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:39.810 [2024-10-07 05:47:43.784434] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:39.810 [2024-10-07 05:47:43.784638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179159 ] 00:28:40.093 [2024-10-07 05:47:43.960300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.370 [2024-10-07 05:47:44.157378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.875  Copying: 256/256 [MB] (average 1142 MBps) 00:28:41.875 00:28:41.875 05:47:45 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:41.875 05:47:45 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:41.875 05:47:45 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:28:41.875 05:47:45 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:28:41.875 05:47:45 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:41.875 05:47:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:41.875 05:47:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:41.875 05:47:45 -- common/autotest_common.sh@10 -- # set +x 00:28:41.875 ************************************ 00:28:41.875 START TEST dd_inflate_file 00:28:41.875 ************************************ 00:28:41.875 05:47:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:41.875 [2024-10-07 05:47:45.787073] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
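A note on the harness plumbing visible above, since it recurs for every spdk_dd run in this section: each method_<rpc>_<n> associative array appears to be what gen_conf later flattens into one entry of the bdev "config" list, with the rpc name as "method" and the array's key/value pairs as "params". gen_conf itself is not printed in this log, so the sketch below is an inference from the JSON blocks that the later runs echo, not a quote from the script:
# The two arrays declared above...
declare -A method_bdev_nvme_attach_controller_1=(
    [name]=Nvme0 [traddr]=0000:00:06.0 [trtype]=pcie )
declare -A method_bdev_aio_create_0=(
    [name]=aio1 [block_size]=4096
    [filename]=/home/vagrant/spdk_repo/spdk/test/dd/aio1 )
# ...are what surface further down as:
#   { "method": "bdev_nvme_attach_controller",
#     "params": { "name": "Nvme0", "traddr": "0000:00:06.0", "trtype": "pcie" } }
#   { "method": "bdev_aio_create",
#     "params": { "name": "aio1", "block_size": 4096, "filename": ".../test/dd/aio1" } }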
00:28:41.875 [2024-10-07 05:47:45.787261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179191 ] 00:28:42.134 [2024-10-07 05:47:45.949146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.393 [2024-10-07 05:47:46.134846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.589  Copying: 64/64 [MB] (average 1163 MBps) 00:28:43.589 00:28:43.589 00:28:43.589 real 0m1.808s 00:28:43.589 user 0m1.342s 00:28:43.589 sys 0m0.329s 00:28:43.589 05:47:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.589 ************************************ 00:28:43.589 END TEST dd_inflate_file 00:28:43.589 ************************************ 00:28:43.589 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.848 05:47:47 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:28:43.848 05:47:47 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:28:43.848 05:47:47 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:43.848 05:47:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:28:43.848 05:47:47 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:28:43.848 05:47:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:43.848 05:47:47 -- dd/common.sh@31 -- # xtrace_disable 00:28:43.848 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.848 05:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:43.848 ************************************ 00:28:43.848 START TEST dd_copy_to_out_bdev 00:28:43.848 ************************************ 00:28:43.848 05:47:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:43.848 { 00:28:43.848 "subsystems": [ 00:28:43.848 { 00:28:43.848 "subsystem": "bdev", 00:28:43.848 "config": [ 00:28:43.848 { 00:28:43.848 "params": { 00:28:43.848 "block_size": 4096, 00:28:43.848 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:43.848 "name": "aio1" 00:28:43.848 }, 00:28:43.848 "method": "bdev_aio_create" 00:28:43.848 }, 00:28:43.848 { 00:28:43.848 "params": { 00:28:43.848 "trtype": "pcie", 00:28:43.848 "traddr": "0000:00:06.0", 00:28:43.848 "name": "Nvme0" 00:28:43.848 }, 00:28:43.848 "method": "bdev_nvme_attach_controller" 00:28:43.848 }, 00:28:43.848 { 00:28:43.848 "method": "bdev_wait_for_examine" 00:28:43.848 } 00:28:43.848 ] 00:28:43.848 } 00:28:43.848 ] 00:28:43.848 } 00:28:43.848 [2024-10-07 05:47:47.665508] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
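The wc -c result above, test_file0_size=67108891, lines up exactly with how dd.dump0 was built: the 26-character magic line plus its trailing newline, followed by the 64 appended 1 MiB units from dd_inflate_file. Rounding that size up to whole 1 MiB units also gives the count of 65 used by the offset_magic passes below. A quick check (the variable name is illustrative):
magic='This Is Our Magic, find it'
echo $(( ${#magic} + 1 + 64 * 1048576 ))         # 27 + 67108864 = 67108891
echo $(( (67108891 + 1048576 - 1) / 1048576 ))   # 65 whole 1 MiB units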
00:28:43.848 [2024-10-07 05:47:47.665732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179250 ] 00:28:44.107 [2024-10-07 05:47:47.836727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.107 [2024-10-07 05:47:48.030160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.427  Copying: 43/64 [MB] (43 MBps) Copying: 64/64 [MB] (average 43 MBps) 00:28:47.427 00:28:47.427 00:28:47.427 real 0m3.395s 00:28:47.427 user 0m2.958s 00:28:47.427 sys 0m0.342s 00:28:47.427 05:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.427 ************************************ 00:28:47.427 END TEST dd_copy_to_out_bdev 00:28:47.427 ************************************ 00:28:47.427 05:47:50 -- common/autotest_common.sh@10 -- # set +x 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:28:47.427 05:47:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:47.427 05:47:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.427 05:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:47.427 ************************************ 00:28:47.427 START TEST dd_offset_magic 00:28:47.427 ************************************ 00:28:47.427 05:47:51 -- common/autotest_common.sh@1104 -- # offset_magic 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:28:47.427 05:47:51 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:47.427 05:47:51 -- dd/common.sh@31 -- # xtrace_disable 00:28:47.427 05:47:51 -- common/autotest_common.sh@10 -- # set +x 00:28:47.427 { 00:28:47.427 "subsystems": [ 00:28:47.427 { 00:28:47.427 "subsystem": "bdev", 00:28:47.427 "config": [ 00:28:47.427 { 00:28:47.427 "params": { 00:28:47.427 "block_size": 4096, 00:28:47.427 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:47.427 "name": "aio1" 00:28:47.427 }, 00:28:47.427 "method": "bdev_aio_create" 00:28:47.427 }, 00:28:47.427 { 00:28:47.427 "params": { 00:28:47.427 "trtype": "pcie", 00:28:47.427 "traddr": "0000:00:06.0", 00:28:47.427 "name": "Nvme0" 00:28:47.427 }, 00:28:47.427 "method": "bdev_nvme_attach_controller" 00:28:47.427 }, 00:28:47.427 { 00:28:47.427 "method": "bdev_wait_for_examine" 00:28:47.427 } 00:28:47.427 ] 00:28:47.427 } 00:28:47.427 ] 00:28:47.427 } 00:28:47.427 [2024-10-07 05:47:51.114476] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
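Throughout these runs the bdev config is never written to disk: gen_conf emits JSON like the block above and spdk_dd reads it from file descriptor 62 via --json /dev/fd/62. A minimal way to reproduce that pattern by hand, with the config inlined in a shell variable (the here-string plumbing and the variable name are illustrative, not the harness code):
conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create","params":{"name":"aio1","block_size":4096,
   "filename":"/home/vagrant/spdk_repo/spdk/test/dd/aio1"}},
  {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0",
   "trtype":"pcie","traddr":"0000:00:06.0"}},
  {"method":"bdev_wait_for_examine"}]}]}'
# Expose the JSON on fd 62 and point --json at it, as the test harness does.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --json /dev/fd/62 62<<< "$conf"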
00:28:47.427 [2024-10-07 05:47:51.115324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179313 ] 00:28:47.427 [2024-10-07 05:47:51.283983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.686 [2024-10-07 05:47:51.479118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.000  Copying: 65/65 [MB] (average 125 MBps) 00:28:50.000 00:28:50.000 05:47:53 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:28:50.000 05:47:53 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:50.000 05:47:53 -- dd/common.sh@31 -- # xtrace_disable 00:28:50.000 05:47:53 -- common/autotest_common.sh@10 -- # set +x 00:28:50.000 { 00:28:50.000 "subsystems": [ 00:28:50.000 { 00:28:50.000 "subsystem": "bdev", 00:28:50.000 "config": [ 00:28:50.000 { 00:28:50.000 "params": { 00:28:50.000 "block_size": 4096, 00:28:50.000 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:50.000 "name": "aio1" 00:28:50.000 }, 00:28:50.000 "method": "bdev_aio_create" 00:28:50.000 }, 00:28:50.000 { 00:28:50.000 "params": { 00:28:50.000 "trtype": "pcie", 00:28:50.000 "traddr": "0000:00:06.0", 00:28:50.000 "name": "Nvme0" 00:28:50.000 }, 00:28:50.000 "method": "bdev_nvme_attach_controller" 00:28:50.000 }, 00:28:50.000 { 00:28:50.000 "method": "bdev_wait_for_examine" 00:28:50.000 } 00:28:50.000 ] 00:28:50.000 } 00:28:50.000 ] 00:28:50.000 } 00:28:50.000 [2024-10-07 05:47:53.711177] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:28:50.000 [2024-10-07 05:47:53.711383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179358 ] 00:28:50.000 [2024-10-07 05:47:53.877199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.259 [2024-10-07 05:47:54.071187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.897  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:51.897 00:28:51.897 05:47:55 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:51.897 05:47:55 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:51.897 05:47:55 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:51.897 05:47:55 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:28:51.897 05:47:55 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:51.897 05:47:55 -- dd/common.sh@31 -- # xtrace_disable 00:28:51.897 05:47:55 -- common/autotest_common.sh@10 -- # set +x 00:28:51.897 [2024-10-07 05:47:55.692885] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
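That pair of transfers is the core of the offset_magic check: push 65 one-MiB units from Nvme0n1 into aio1 starting at unit 16, pull the single unit at offset 16 back out into dd.dump1, and confirm that the 26-byte magic string at the start of the data survived the round trip; the same sequence is then repeated for offset 64, as the seek=64 invocation above shows. Stripped of the harness plumbing it is roughly the following, with the conf.json stand-in and the source of the final read paraphrased rather than quoted from the script:
# bdev -> bdev: 65 MiB written 16 MiB into aio1
spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json conf.json
# bdev -> file: read back just the 1 MiB unit at offset 16
spdk_dd --ib=aio1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json conf.json
# the first 26 bytes of the read-back unit must be the magic string
read -rn26 magic_check < dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]]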
00:28:51.897 [2024-10-07 05:47:55.693085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179389 ] 00:28:51.897 { 00:28:51.897 "subsystems": [ 00:28:51.897 { 00:28:51.897 "subsystem": "bdev", 00:28:51.897 "config": [ 00:28:51.897 { 00:28:51.897 "params": { 00:28:51.897 "block_size": 4096, 00:28:51.897 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:51.897 "name": "aio1" 00:28:51.897 }, 00:28:51.897 "method": "bdev_aio_create" 00:28:51.897 }, 00:28:51.897 { 00:28:51.897 "params": { 00:28:51.897 "trtype": "pcie", 00:28:51.897 "traddr": "0000:00:06.0", 00:28:51.897 "name": "Nvme0" 00:28:51.897 }, 00:28:51.897 "method": "bdev_nvme_attach_controller" 00:28:51.897 }, 00:28:51.897 { 00:28:51.897 "method": "bdev_wait_for_examine" 00:28:51.897 } 00:28:51.897 ] 00:28:51.897 } 00:28:51.897 ] 00:28:51.897 } 00:28:51.897 [2024-10-07 05:47:55.860688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.156 [2024-10-07 05:47:56.067842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.029  Copying: 65/65 [MB] (average 119 MBps) 00:28:54.029 00:28:54.029 05:47:58 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:28:54.029 05:47:58 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:54.029 05:47:58 -- dd/common.sh@31 -- # xtrace_disable 00:28:54.029 05:47:58 -- common/autotest_common.sh@10 -- # set +x 00:28:54.288 [2024-10-07 05:47:58.070556] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
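A quick capacity sanity check on the deeper offset: aio1 was zero-filled earlier with 256 one-MiB units, so even the seek=64 pass, which writes 65 units starting at unit 64, stays well inside the backing file:
echo $(( 64 + 65 ))   # 129 MiB high-water mark inside the 256 MiB aio1 file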
00:28:54.288 [2024-10-07 05:47:58.070739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179424 ] 00:28:54.288 { 00:28:54.288 "subsystems": [ 00:28:54.288 { 00:28:54.288 "subsystem": "bdev", 00:28:54.288 "config": [ 00:28:54.288 { 00:28:54.288 "params": { 00:28:54.288 "block_size": 4096, 00:28:54.288 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:54.288 "name": "aio1" 00:28:54.288 }, 00:28:54.288 "method": "bdev_aio_create" 00:28:54.288 }, 00:28:54.288 { 00:28:54.288 "params": { 00:28:54.288 "trtype": "pcie", 00:28:54.288 "traddr": "0000:00:06.0", 00:28:54.288 "name": "Nvme0" 00:28:54.288 }, 00:28:54.288 "method": "bdev_nvme_attach_controller" 00:28:54.288 }, 00:28:54.288 { 00:28:54.288 "method": "bdev_wait_for_examine" 00:28:54.288 } 00:28:54.288 ] 00:28:54.288 } 00:28:54.288 ] 00:28:54.288 } 00:28:54.288 [2024-10-07 05:47:58.236150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.551 [2024-10-07 05:47:58.449793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.056  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:56.056 00:28:56.056 05:47:59 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:56.056 05:47:59 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:56.056 00:28:56.056 real 0m8.909s 00:28:56.056 user 0m6.339s 00:28:56.056 sys 0m1.349s 00:28:56.056 05:47:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.056 ************************************ 00:28:56.056 END TEST dd_offset_magic 00:28:56.056 ************************************ 00:28:56.056 05:47:59 -- common/autotest_common.sh@10 -- # set +x 00:28:56.056 05:47:59 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:28:56.056 05:47:59 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:28:56.056 05:47:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:56.056 05:47:59 -- dd/common.sh@11 -- # local nvme_ref= 00:28:56.056 05:47:59 -- dd/common.sh@12 -- # local size=4194330 00:28:56.056 05:47:59 -- dd/common.sh@14 -- # local bs=1048576 00:28:56.056 05:47:59 -- dd/common.sh@15 -- # local count=5 00:28:56.056 05:47:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:28:56.056 05:47:59 -- dd/common.sh@18 -- # gen_conf 00:28:56.056 05:47:59 -- dd/common.sh@31 -- # xtrace_disable 00:28:56.056 05:47:59 -- common/autotest_common.sh@10 -- # set +x 00:28:56.315 { 00:28:56.315 "subsystems": [ 00:28:56.315 { 00:28:56.315 "subsystem": "bdev", 00:28:56.315 "config": [ 00:28:56.315 { 00:28:56.315 "params": { 00:28:56.315 "block_size": 4096, 00:28:56.315 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:56.315 "name": "aio1" 00:28:56.315 }, 00:28:56.315 "method": "bdev_aio_create" 00:28:56.315 }, 00:28:56.315 { 00:28:56.315 "params": { 00:28:56.315 "trtype": "pcie", 00:28:56.315 "traddr": "0000:00:06.0", 00:28:56.315 "name": "Nvme0" 00:28:56.315 }, 00:28:56.315 "method": "bdev_nvme_attach_controller" 00:28:56.315 }, 00:28:56.315 { 00:28:56.315 "method": "bdev_wait_for_examine" 00:28:56.315 } 00:28:56.315 ] 00:28:56.315 } 00:28:56.315 ] 00:28:56.315 } 00:28:56.315 [2024-10-07 05:48:00.065265] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
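On the cleanup sizing above: clear_nvme is asked to wipe 4194330 bytes of Nvme0n1, which is 4 * 1048576 plus 26; the extra 26 bytes match the length of the magic string, though that reading is an assumption, while the arithmetic itself is not. At bs=1048576 the size rounds up to the count=5 seen in the run:
size=4194330; bs=1048576
echo $(( size - 4 * bs ))          # 26 bytes beyond the 4 MiB boundary
echo $(( (size + bs - 1) / bs ))   # 5 whole 1 MiB units, matching --count=5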
00:28:56.315 [2024-10-07 05:48:00.065497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179473 ] 00:28:56.315 [2024-10-07 05:48:00.235341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.574 [2024-10-07 05:48:00.437892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.078  Copying: 5120/5120 [kB] (average 1250 MBps) 00:28:58.078 00:28:58.078 05:48:01 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:28:58.078 05:48:01 -- dd/common.sh@10 -- # local bdev=aio1 00:28:58.078 05:48:01 -- dd/common.sh@11 -- # local nvme_ref= 00:28:58.078 05:48:01 -- dd/common.sh@12 -- # local size=4194330 00:28:58.078 05:48:01 -- dd/common.sh@14 -- # local bs=1048576 00:28:58.078 05:48:01 -- dd/common.sh@15 -- # local count=5 00:28:58.078 05:48:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:28:58.078 05:48:01 -- dd/common.sh@18 -- # gen_conf 00:28:58.078 05:48:01 -- dd/common.sh@31 -- # xtrace_disable 00:28:58.078 05:48:01 -- common/autotest_common.sh@10 -- # set +x 00:28:58.078 { 00:28:58.078 "subsystems": [ 00:28:58.078 { 00:28:58.078 "subsystem": "bdev", 00:28:58.078 "config": [ 00:28:58.078 { 00:28:58.078 "params": { 00:28:58.078 "block_size": 4096, 00:28:58.078 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:58.078 "name": "aio1" 00:28:58.078 }, 00:28:58.078 "method": "bdev_aio_create" 00:28:58.078 }, 00:28:58.078 { 00:28:58.078 "params": { 00:28:58.078 "trtype": "pcie", 00:28:58.078 "traddr": "0000:00:06.0", 00:28:58.078 "name": "Nvme0" 00:28:58.078 }, 00:28:58.078 "method": "bdev_nvme_attach_controller" 00:28:58.078 }, 00:28:58.078 { 00:28:58.078 "method": "bdev_wait_for_examine" 00:28:58.078 } 00:28:58.078 ] 00:28:58.078 } 00:28:58.078 ] 00:28:58.078 } 00:28:58.078 [2024-10-07 05:48:01.974876] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:28:58.078 [2024-10-07 05:48:01.975102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179502 ] 00:28:58.338 [2024-10-07 05:48:02.147120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.596 [2024-10-07 05:48:02.369510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.258  Copying: 5120/5120 [kB] (average 1250 MBps) 00:29:00.258 00:29:00.258 05:48:03 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:29:00.258 00:29:00.258 real 0m20.294s 00:29:00.258 user 0m15.177s 00:29:00.258 sys 0m3.282s 00:29:00.258 05:48:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.258 05:48:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.258 ************************************ 00:29:00.258 END TEST spdk_dd_bdev_to_bdev 00:29:00.258 ************************************ 00:29:00.258 05:48:03 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:29:00.258 05:48:03 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:00.258 05:48:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.258 05:48:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.258 05:48:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.258 ************************************ 00:29:00.258 START TEST spdk_dd_sparse 00:29:00.258 ************************************ 00:29:00.258 05:48:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:29:00.258 * Looking for test storage... 
00:29:00.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:00.258 05:48:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:00.258 05:48:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.258 05:48:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.258 05:48:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.258 05:48:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.258 05:48:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.258 05:48:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.258 05:48:04 -- paths/export.sh@5 -- # export PATH 00:29:00.258 05:48:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:00.258 05:48:04 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:29:00.258 05:48:04 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:29:00.258 05:48:04 -- dd/sparse.sh@110 -- # file1=file_zero1 00:29:00.258 05:48:04 -- dd/sparse.sh@111 -- # file2=file_zero2 00:29:00.258 05:48:04 -- dd/sparse.sh@112 -- # file3=file_zero3 00:29:00.258 05:48:04 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:29:00.258 05:48:04 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:29:00.258 05:48:04 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:29:00.258 05:48:04 -- dd/sparse.sh@118 -- # prepare 00:29:00.258 05:48:04 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:29:00.258 05:48:04 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:29:00.258 1+0 records in 00:29:00.258 1+0 records 
out 00:29:00.258 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0118061 s, 355 MB/s 00:29:00.258 05:48:04 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:29:00.258 1+0 records in 00:29:00.258 1+0 records out 00:29:00.258 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00961057 s, 436 MB/s 00:29:00.258 05:48:04 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:29:00.258 1+0 records in 00:29:00.258 1+0 records out 00:29:00.258 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0100127 s, 419 MB/s 00:29:00.258 05:48:04 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:29:00.258 05:48:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.258 05:48:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.258 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.258 ************************************ 00:29:00.258 START TEST dd_sparse_file_to_file 00:29:00.258 ************************************ 00:29:00.258 05:48:04 -- common/autotest_common.sh@1104 -- # file_to_file 00:29:00.258 05:48:04 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:29:00.258 05:48:04 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:29:00.258 05:48:04 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:00.258 05:48:04 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:29:00.258 05:48:04 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:29:00.258 05:48:04 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:29:00.258 05:48:04 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:29:00.258 05:48:04 -- dd/sparse.sh@41 -- # gen_conf 00:29:00.258 05:48:04 -- dd/common.sh@31 -- # xtrace_disable 00:29:00.258 05:48:04 -- common/autotest_common.sh@10 -- # set +x 00:29:00.258 { 00:29:00.258 "subsystems": [ 00:29:00.258 { 00:29:00.258 "subsystem": "bdev", 00:29:00.258 "config": [ 00:29:00.258 { 00:29:00.258 "params": { 00:29:00.258 "block_size": 4096, 00:29:00.258 "filename": "dd_sparse_aio_disk", 00:29:00.258 "name": "dd_aio" 00:29:00.258 }, 00:29:00.258 "method": "bdev_aio_create" 00:29:00.258 }, 00:29:00.258 { 00:29:00.258 "params": { 00:29:00.258 "lvs_name": "dd_lvstore", 00:29:00.258 "bdev_name": "dd_aio" 00:29:00.258 }, 00:29:00.258 "method": "bdev_lvol_create_lvstore" 00:29:00.258 }, 00:29:00.258 { 00:29:00.258 "method": "bdev_wait_for_examine" 00:29:00.258 } 00:29:00.258 ] 00:29:00.258 } 00:29:00.258 ] 00:29:00.258 } 00:29:00.258 [2024-10-07 05:48:04.201782] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
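The three dd writes above make file_zero1 genuinely sparse: 4 MiB of data at seek 0, 4 and 8 in 4 MiB units, i.e. at byte offsets 0, 16 MiB and 32 MiB, so the apparent size is 36 MiB while only 12 MiB is actually allocated. Those are exactly the numbers the %s and %b stat comparisons further down verify, and 12 MiB is also the --bs the sparse copies use, which lines up with the "Copying: 12/36 [MB]" progress they report (12 MiB of real data out of a 36 MiB span). The expected values, spelled out:
echo $(( (8 + 1) * 4 * 1048576 ))   # 37748736 apparent bytes        (stat %s)
echo $(( 3 * 4 * 1048576 / 512 ))   # 24576 allocated 512-byte blocks (stat %b)
echo $(( 12 * 1048576 ))            # 12582912, the --bs passed to spdk_dd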
00:29:00.258 [2024-10-07 05:48:04.202565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179588 ] 00:29:00.517 [2024-10-07 05:48:04.368405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.777 [2024-10-07 05:48:04.559760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.415  Copying: 12/36 [MB] (average 857 MBps) 00:29:02.415 00:29:02.415 05:48:06 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:29:02.415 05:48:06 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:29:02.415 05:48:06 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:29:02.415 05:48:06 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:29:02.415 05:48:06 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:02.415 05:48:06 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:29:02.415 05:48:06 -- dd/sparse.sh@52 -- # stat1_b=24576 00:29:02.415 05:48:06 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:29:02.415 05:48:06 -- dd/sparse.sh@53 -- # stat2_b=24576 00:29:02.415 05:48:06 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:02.415 00:29:02.415 real 0m1.985s 00:29:02.415 user 0m1.543s 00:29:02.415 sys 0m0.313s 00:29:02.415 05:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.415 05:48:06 -- common/autotest_common.sh@10 -- # set +x 00:29:02.415 ************************************ 00:29:02.415 END TEST dd_sparse_file_to_file 00:29:02.415 ************************************ 00:29:02.415 05:48:06 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:29:02.415 05:48:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:02.415 05:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:02.415 05:48:06 -- common/autotest_common.sh@10 -- # set +x 00:29:02.415 ************************************ 00:29:02.415 START TEST dd_sparse_file_to_bdev 00:29:02.415 ************************************ 00:29:02.415 05:48:06 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:29:02.415 05:48:06 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:02.415 05:48:06 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:29:02.415 05:48:06 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:29:02.415 05:48:06 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:29:02.415 05:48:06 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:29:02.415 05:48:06 -- dd/sparse.sh@73 -- # gen_conf 00:29:02.415 05:48:06 -- dd/common.sh@31 -- # xtrace_disable 00:29:02.415 05:48:06 -- common/autotest_common.sh@10 -- # set +x 00:29:02.415 { 00:29:02.415 "subsystems": [ 00:29:02.415 { 00:29:02.415 "subsystem": "bdev", 00:29:02.415 "config": [ 00:29:02.415 { 00:29:02.415 "params": { 00:29:02.415 "block_size": 4096, 00:29:02.415 "filename": "dd_sparse_aio_disk", 00:29:02.415 "name": "dd_aio" 00:29:02.415 }, 00:29:02.415 "method": "bdev_aio_create" 00:29:02.415 }, 00:29:02.415 { 00:29:02.415 "params": { 00:29:02.415 "lvs_name": "dd_lvstore", 00:29:02.415 "lvol_name": "dd_lvol", 00:29:02.415 "size": 37748736, 00:29:02.415 "thin_provision": true 00:29:02.415 }, 
00:29:02.415 "method": "bdev_lvol_create" 00:29:02.415 }, 00:29:02.415 { 00:29:02.415 "method": "bdev_wait_for_examine" 00:29:02.415 } 00:29:02.415 ] 00:29:02.415 } 00:29:02.415 ] 00:29:02.415 } 00:29:02.416 [2024-10-07 05:48:06.241101] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:02.416 [2024-10-07 05:48:06.241791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179656 ] 00:29:02.674 [2024-10-07 05:48:06.408472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.675 [2024-10-07 05:48:06.594933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.934 [2024-10-07 05:48:06.891629] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:29:03.193  Copying: 12/36 [MB] (average 480 MBps)[2024-10-07 05:48:06.954638] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:29:04.129 00:29:04.129 00:29:04.129 00:29:04.129 real 0m1.922s 00:29:04.129 user 0m1.532s 00:29:04.129 sys 0m0.296s 00:29:04.129 05:48:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.129 ************************************ 00:29:04.129 END TEST dd_sparse_file_to_bdev 00:29:04.129 05:48:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.129 ************************************ 00:29:04.389 05:48:08 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:29:04.389 05:48:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.389 05:48:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.389 05:48:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.389 ************************************ 00:29:04.389 START TEST dd_sparse_bdev_to_file 00:29:04.389 ************************************ 00:29:04.389 05:48:08 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:29:04.389 05:48:08 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:29:04.389 05:48:08 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:29:04.389 05:48:08 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:29:04.389 05:48:08 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:29:04.389 05:48:08 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:29:04.389 05:48:08 -- dd/sparse.sh@91 -- # gen_conf 00:29:04.389 05:48:08 -- dd/common.sh@31 -- # xtrace_disable 00:29:04.389 05:48:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.389 { 00:29:04.389 "subsystems": [ 00:29:04.389 { 00:29:04.389 "subsystem": "bdev", 00:29:04.389 "config": [ 00:29:04.389 { 00:29:04.389 "params": { 00:29:04.389 "block_size": 4096, 00:29:04.389 "filename": "dd_sparse_aio_disk", 00:29:04.389 "name": "dd_aio" 00:29:04.389 }, 00:29:04.389 "method": "bdev_aio_create" 00:29:04.389 }, 00:29:04.389 { 00:29:04.389 "method": "bdev_wait_for_examine" 00:29:04.389 } 00:29:04.389 ] 00:29:04.389 } 00:29:04.389 ] 00:29:04.389 } 00:29:04.389 [2024-10-07 05:48:08.224807] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
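Taken together, the parameters echoed in this section describe the full stack the three sparse sub-tests drive: an AIO bdev over the 104857600-byte dd_sparse_aio_disk file, an lvstore on it, and a 36 MiB thin-provisioned logical volume. A consolidated sketch of that config, assembled from the printed params rather than copied from any single gen_conf output (writing it to a file here is purely illustrative):
cat > dd_sparse_stack.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_aio_create",
    "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
  { "method": "bdev_lvol_create_lvstore",
    "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
  { "method": "bdev_lvol_create",
    "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                "size": 37748736, "thin_provision": true } },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
Note that the bdev_to_file config just above lists only dd_aio plus bdev_wait_for_examine; the lvstore and dd_lvol created in the previous step are presumably rediscovered from the metadata already on the AIO file during examine, which is why --ib=dd_lvstore/dd_lvol still resolves.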
00:29:04.389 [2024-10-07 05:48:08.225020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179707 ] 00:29:04.647 [2024-10-07 05:48:08.395766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.647 [2024-10-07 05:48:08.602226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.151  Copying: 12/36 [MB] (average 923 MBps) 00:29:06.151 00:29:06.151 05:48:10 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:06.151 05:48:10 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:06.151 05:48:10 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:06.151 05:48:10 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:06.151 05:48:10 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:06.151 05:48:10 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:06.151 05:48:10 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:06.151 05:48:10 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:06.151 05:48:10 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:06.151 05:48:10 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:06.151 00:29:06.151 real 0m1.965s 00:29:06.151 user 0m1.545s 00:29:06.151 sys 0m0.311s 00:29:06.151 05:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.151 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.151 ************************************ 00:29:06.151 END TEST dd_sparse_bdev_to_file 00:29:06.151 ************************************ 00:29:06.411 05:48:10 -- dd/sparse.sh@1 -- # cleanup 00:29:06.411 05:48:10 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:06.411 05:48:10 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:06.411 05:48:10 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:06.411 05:48:10 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:06.411 00:29:06.411 real 0m6.198s 00:29:06.411 user 0m4.784s 00:29:06.411 sys 0m1.079s 00:29:06.411 05:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.411 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.411 ************************************ 00:29:06.411 END TEST spdk_dd_sparse 00:29:06.411 ************************************ 00:29:06.411 05:48:10 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:06.411 05:48:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.411 05:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.411 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.411 ************************************ 00:29:06.411 START TEST spdk_dd_negative 00:29:06.411 ************************************ 00:29:06.411 05:48:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:06.411 * Looking for test storage... 
00:29:06.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:06.411 05:48:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.411 05:48:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.411 05:48:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.411 05:48:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.411 05:48:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.411 05:48:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.411 05:48:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.411 05:48:10 -- paths/export.sh@5 -- # export PATH 00:29:06.411 05:48:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:06.411 05:48:10 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:06.411 05:48:10 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:06.411 05:48:10 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:06.411 05:48:10 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:06.411 05:48:10 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:06.411 05:48:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.411 05:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.411 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.411 ************************************ 00:29:06.411 
START TEST dd_invalid_arguments 00:29:06.411 ************************************ 00:29:06.411 05:48:10 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:06.411 05:48:10 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:06.411 05:48:10 -- common/autotest_common.sh@640 -- # local es=0 00:29:06.411 05:48:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:06.411 05:48:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.411 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.411 05:48:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.411 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.411 05:48:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.411 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.411 05:48:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.411 05:48:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:06.411 05:48:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:06.671 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:06.671 options: 00:29:06.671 -c, --config JSON config file (default none) 00:29:06.671 --json JSON config file (default none) 00:29:06.671 --json-ignore-init-errors 00:29:06.671 don't exit on invalid config entry 00:29:06.671 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:06.671 -g, --single-file-segments 00:29:06.671 force creating just one hugetlbfs file 00:29:06.671 -h, --help show this usage 00:29:06.671 -i, --shm-id shared memory ID (optional) 00:29:06.671 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:06.671 --lcores lcore to CPU mapping list. The list is in the format: 00:29:06.671 [<,lcores[@CPUs]>...] 00:29:06.671 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:06.671 Within the group, '-' is used for range separator, 00:29:06.671 ',' is used for single number separator. 00:29:06.671 '( )' can be omitted for single element group, 00:29:06.671 '@' can be omitted if cpus and lcores have the same value 00:29:06.671 -n, --mem-channels channel number of memory channels used for DPDK 00:29:06.671 -p, --main-core main (primary) core for DPDK 00:29:06.671 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:06.671 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:06.671 --disable-cpumask-locks Disable CPU core lock files. 
00:29:06.671 --silence-noticelog disable notice level logging to stderr 00:29:06.671 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:06.671 -u, --no-pci disable PCI access 00:29:06.671 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:06.671 --max-delay maximum reactor delay (in microseconds) 00:29:06.671 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:06.671 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:06.671 -R, --huge-unlink unlink huge files after initialization 00:29:06.671 -v, --version print SPDK version 00:29:06.671 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:06.671 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:06.671 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:06.671 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:06.671 Tracepoints vary in size and can use more than one trace entry. 00:29:06.671 --rpcs-allowed comma-separated list of permitted RPCS 00:29:06.671 --env-context Opaque context for use of the env implementation 00:29:06.671 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:06.671 --no-huge run without using hugepages 00:29:06.671 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:06.671 -e, --tpoint-group [:] 00:29:06.671 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:06.671 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:06.671 Groups and /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:06.671 [2024-10-07 05:48:10.429380] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:06.671 masks can be combined (e.g. thread,bdev:0x1). 00:29:06.671 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:06.671 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:06.671 [--------- DD Options ---------] 00:29:06.671 --if Input file. Must specify either --if or --ib. 00:29:06.671 --ib Input bdev. Must specifier either --if or --ib 00:29:06.671 --of Output file. Must specify either --of or --ob. 00:29:06.671 --ob Output bdev. Must specify either --of or --ob. 00:29:06.671 --iflag Input file flags. 00:29:06.671 --oflag Output file flags. 00:29:06.671 --bs I/O unit size (default: 4096) 00:29:06.671 --qd Queue depth (default: 2) 00:29:06.671 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:29:06.671 --skip Skip this many I/O units at start of input. (default: 0) 00:29:06.671 --seek Skip this many I/O units at start of output. (default: 0) 00:29:06.671 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:06.671 --sparse Enable hole skipping in input target 00:29:06.671 Available iflag and oflag values: 00:29:06.671 append - append mode 00:29:06.671 direct - use direct I/O for data 00:29:06.671 directory - fail unless a directory 00:29:06.671 dsync - use synchronized I/O for data 00:29:06.671 noatime - do not update access time 00:29:06.671 noctty - do not assign controlling terminal from file 00:29:06.671 nofollow - do not follow symlinks 00:29:06.671 nonblock - use non-blocking I/O 00:29:06.671 sync - use synchronized I/O for data and metadata 00:29:06.671 05:48:10 -- common/autotest_common.sh@643 -- # es=2 00:29:06.671 05:48:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:06.671 05:48:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:06.671 05:48:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:06.671 00:29:06.671 real 0m0.121s 00:29:06.671 user 0m0.059s 00:29:06.671 sys 0m0.063s 00:29:06.671 05:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.671 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.671 ************************************ 00:29:06.671 END TEST dd_invalid_arguments 00:29:06.671 ************************************ 00:29:06.671 05:48:10 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:06.671 05:48:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.672 05:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.672 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.672 ************************************ 00:29:06.672 START TEST dd_double_input 00:29:06.672 ************************************ 00:29:06.672 05:48:10 -- common/autotest_common.sh@1104 -- # double_input 00:29:06.672 05:48:10 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:06.672 05:48:10 -- common/autotest_common.sh@640 -- # local es=0 00:29:06.672 05:48:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:06.672 05:48:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.672 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.672 05:48:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.672 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.672 05:48:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.672 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.672 05:48:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.672 05:48:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:06.672 05:48:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:06.672 [2024-10-07 05:48:10.599827] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
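The negative cases in this part of the run all share one shape: hand spdk_dd a deliberately broken combination of the options listed in the usage text above, and count the test as passed only when the exit status is non-zero (es=2 for the unrecognized option, es=22 for the validation errors that follow). A stripped-down sketch of that pattern, not the harness's actual NOT/valid_exec_arg machinery, with paths shortened:
# Succeed only when the wrapped command fails.
not() { ! "$@"; }
not spdk_dd --ii= --ob=                           # unknown option       -> es=2
not spdk_dd --if=dd.dump0 --ib= --ob=             # both --if and --ib   -> es=22
not spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=     # both --of and --ob   -> es=22
not spdk_dd --ob=                                 # no input given       -> es=22
not spdk_dd --if=dd.dump0                         # no output given      -> es=22
not spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0    # invalid --bs         -> es=22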
00:29:06.672 05:48:10 -- common/autotest_common.sh@643 -- # es=22 00:29:06.672 05:48:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:06.672 05:48:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:06.672 05:48:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:06.672 00:29:06.672 real 0m0.117s 00:29:06.672 user 0m0.051s 00:29:06.672 sys 0m0.067s 00:29:06.672 05:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.672 ************************************ 00:29:06.672 END TEST dd_double_input 00:29:06.672 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.672 ************************************ 00:29:06.931 05:48:10 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:06.931 05:48:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.931 05:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.931 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.931 ************************************ 00:29:06.931 START TEST dd_double_output 00:29:06.931 ************************************ 00:29:06.931 05:48:10 -- common/autotest_common.sh@1104 -- # double_output 00:29:06.931 05:48:10 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:06.931 05:48:10 -- common/autotest_common.sh@640 -- # local es=0 00:29:06.931 05:48:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:06.931 05:48:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.931 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.931 05:48:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.931 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.931 05:48:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.931 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.931 05:48:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.931 05:48:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:06.931 05:48:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:06.931 [2024-10-07 05:48:10.783076] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:29:06.931 05:48:10 -- common/autotest_common.sh@643 -- # es=22 00:29:06.931 05:48:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:06.931 05:48:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:06.931 05:48:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:06.931 00:29:06.931 real 0m0.124s 00:29:06.931 user 0m0.062s 00:29:06.931 sys 0m0.062s 00:29:06.931 05:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.931 ************************************ 00:29:06.931 END TEST dd_double_output 00:29:06.931 ************************************ 00:29:06.931 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.931 05:48:10 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:06.931 05:48:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.931 05:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.931 05:48:10 -- common/autotest_common.sh@10 -- # set +x 00:29:06.931 ************************************ 00:29:06.932 START TEST dd_no_input 00:29:06.932 ************************************ 00:29:06.932 05:48:10 -- common/autotest_common.sh@1104 -- # no_input 00:29:06.932 05:48:10 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:06.932 05:48:10 -- common/autotest_common.sh@640 -- # local es=0 00:29:06.932 05:48:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:06.932 05:48:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.932 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.932 05:48:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.932 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.932 05:48:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.932 05:48:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:06.932 05:48:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:06.932 05:48:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:06.932 05:48:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:07.192 [2024-10-07 05:48:10.966319] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:07.192 05:48:11 -- common/autotest_common.sh@643 -- # es=22 00:29:07.192 05:48:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:07.192 05:48:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:07.192 05:48:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:07.192 00:29:07.192 real 0m0.119s 00:29:07.192 user 0m0.050s 00:29:07.192 sys 0m0.070s 00:29:07.192 05:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.192 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.192 ************************************ 00:29:07.192 END TEST dd_no_input 00:29:07.192 ************************************ 00:29:07.192 05:48:11 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:07.192 05:48:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:07.192 05:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.192 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.192 ************************************ 
00:29:07.192 START TEST dd_no_output 00:29:07.192 ************************************ 00:29:07.192 05:48:11 -- common/autotest_common.sh@1104 -- # no_output 00:29:07.192 05:48:11 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:07.192 05:48:11 -- common/autotest_common.sh@640 -- # local es=0 00:29:07.192 05:48:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:07.192 05:48:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.192 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.192 05:48:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.192 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.192 05:48:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.192 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.192 05:48:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.192 05:48:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:07.192 05:48:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:07.192 [2024-10-07 05:48:11.147294] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:07.451 05:48:11 -- common/autotest_common.sh@643 -- # es=22 00:29:07.451 05:48:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:07.451 05:48:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:07.451 05:48:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:07.451 00:29:07.451 real 0m0.119s 00:29:07.451 user 0m0.070s 00:29:07.451 sys 0m0.049s 00:29:07.451 05:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.451 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.451 ************************************ 00:29:07.451 END TEST dd_no_output 00:29:07.451 ************************************ 00:29:07.451 05:48:11 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:07.451 05:48:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:07.451 05:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.451 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.451 ************************************ 00:29:07.451 START TEST dd_wrong_blocksize 00:29:07.451 ************************************ 00:29:07.451 05:48:11 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:07.451 05:48:11 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:07.451 05:48:11 -- common/autotest_common.sh@640 -- # local es=0 00:29:07.451 05:48:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:07.451 05:48:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.451 05:48:11 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:07.451 05:48:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.451 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.451 05:48:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.451 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.451 05:48:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.451 05:48:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:07.451 05:48:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:07.451 [2024-10-07 05:48:11.322356] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:07.451 05:48:11 -- common/autotest_common.sh@643 -- # es=22 00:29:07.451 05:48:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:07.451 05:48:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:07.451 05:48:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:07.451 00:29:07.451 real 0m0.117s 00:29:07.451 user 0m0.059s 00:29:07.451 sys 0m0.059s 00:29:07.451 05:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.451 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.451 ************************************ 00:29:07.451 END TEST dd_wrong_blocksize 00:29:07.451 ************************************ 00:29:07.451 05:48:11 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:07.451 05:48:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:07.451 05:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.451 05:48:11 -- common/autotest_common.sh@10 -- # set +x 00:29:07.451 ************************************ 00:29:07.451 START TEST dd_smaller_blocksize 00:29:07.452 ************************************ 00:29:07.452 05:48:11 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:07.452 05:48:11 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:07.452 05:48:11 -- common/autotest_common.sh@640 -- # local es=0 00:29:07.452 05:48:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:07.452 05:48:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.452 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.452 05:48:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.711 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.711 05:48:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.711 05:48:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:07.711 05:48:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.711 05:48:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:07.711 05:48:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:07.711 [2024-10-07 05:48:11.499357] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:07.711 [2024-10-07 05:48:11.499549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179978 ] 00:29:07.711 [2024-10-07 05:48:11.672417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.970 [2024-10-07 05:48:11.934523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.539 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:08.798 [2024-10-07 05:48:12.576428] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:08.798 [2024-10-07 05:48:12.576535] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:09.367 [2024-10-07 05:48:13.220526] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:09.626 05:48:13 -- common/autotest_common.sh@643 -- # es=244 00:29:09.626 05:48:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:09.626 05:48:13 -- common/autotest_common.sh@652 -- # es=116 00:29:09.626 05:48:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:09.626 05:48:13 -- common/autotest_common.sh@660 -- # es=1 00:29:09.626 05:48:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:09.626 00:29:09.626 real 0m2.165s 00:29:09.626 user 0m1.452s 00:29:09.626 sys 0m0.611s 00:29:09.626 05:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.626 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.626 ************************************ 00:29:09.626 END TEST dd_smaller_blocksize 00:29:09.626 ************************************ 00:29:09.886 05:48:13 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:09.886 05:48:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:09.886 05:48:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.886 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.886 ************************************ 00:29:09.886 START TEST dd_invalid_count 00:29:09.886 ************************************ 00:29:09.886 05:48:13 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:09.886 05:48:13 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:09.886 05:48:13 -- common/autotest_common.sh@640 -- # local es=0 00:29:09.886 05:48:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:09.886 05:48:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:09.886 05:48:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:09.886 [2024-10-07 05:48:13.722802] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:09.886 05:48:13 -- common/autotest_common.sh@643 -- # es=22 00:29:09.886 05:48:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:09.886 05:48:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:09.886 05:48:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:09.886 00:29:09.886 real 0m0.115s 00:29:09.886 user 0m0.044s 00:29:09.886 sys 0m0.072s 00:29:09.886 05:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.886 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.886 ************************************ 00:29:09.886 END TEST dd_invalid_count 00:29:09.886 ************************************ 00:29:09.886 05:48:13 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:09.886 05:48:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:09.886 05:48:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.886 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.886 ************************************ 00:29:09.886 START TEST dd_invalid_oflag 00:29:09.886 ************************************ 00:29:09.886 05:48:13 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:09.886 05:48:13 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:09.886 05:48:13 -- common/autotest_common.sh@640 -- # local es=0 00:29:09.886 05:48:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:09.886 05:48:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:09.886 05:48:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.886 05:48:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:09.886 05:48:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:10.145 [2024-10-07 05:48:13.885116] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:10.145 05:48:13 -- common/autotest_common.sh@643 -- # es=22 00:29:10.145 05:48:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:10.145 05:48:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:10.145 
05:48:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:10.145 00:29:10.145 real 0m0.101s 00:29:10.145 user 0m0.048s 00:29:10.145 sys 0m0.053s 00:29:10.145 05:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.145 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:10.145 ************************************ 00:29:10.145 END TEST dd_invalid_oflag 00:29:10.145 ************************************ 00:29:10.145 05:48:13 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:10.145 05:48:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:10.145 05:48:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.145 05:48:13 -- common/autotest_common.sh@10 -- # set +x 00:29:10.145 ************************************ 00:29:10.145 START TEST dd_invalid_iflag 00:29:10.145 ************************************ 00:29:10.145 05:48:13 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:10.145 05:48:13 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:10.145 05:48:13 -- common/autotest_common.sh@640 -- # local es=0 00:29:10.145 05:48:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:10.145 05:48:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.145 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.145 05:48:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.145 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.145 05:48:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.145 05:48:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.145 05:48:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.145 05:48:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:10.146 05:48:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:10.146 [2024-10-07 05:48:14.050741] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:10.146 05:48:14 -- common/autotest_common.sh@643 -- # es=22 00:29:10.146 05:48:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:10.146 05:48:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:10.146 05:48:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:10.146 00:29:10.146 real 0m0.112s 00:29:10.146 user 0m0.062s 00:29:10.146 sys 0m0.050s 00:29:10.146 05:48:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.146 05:48:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.146 ************************************ 00:29:10.146 END TEST dd_invalid_iflag 00:29:10.146 ************************************ 00:29:10.405 05:48:14 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:10.405 05:48:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:10.405 05:48:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.405 05:48:14 -- common/autotest_common.sh@10 -- # set +x 00:29:10.405 ************************************ 00:29:10.405 START TEST dd_unknown_flag 00:29:10.405 ************************************ 00:29:10.405 05:48:14 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:10.405 05:48:14 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:10.405 05:48:14 -- common/autotest_common.sh@640 -- # local es=0 00:29:10.405 05:48:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:10.405 05:48:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.405 05:48:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.405 05:48:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.405 05:48:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.405 05:48:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.405 05:48:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:10.405 05:48:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.405 05:48:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:10.406 05:48:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:10.406 [2024-10-07 05:48:14.224452] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:10.406 [2024-10-07 05:48:14.224612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180110 ] 00:29:10.406 [2024-10-07 05:48:14.380020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.665 [2024-10-07 05:48:14.574224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.930 [2024-10-07 05:48:14.860497] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:10.930 [2024-10-07 05:48:14.860608] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:10.930 [2024-10-07 05:48:14.860642] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:29:10.930 [2024-10-07 05:48:14.860711] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:11.867 [2024-10-07 05:48:15.498173] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:12.126 05:48:15 -- common/autotest_common.sh@643 -- # es=236 00:29:12.126 05:48:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:12.126 05:48:15 -- common/autotest_common.sh@652 -- # es=108 00:29:12.126 05:48:15 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:12.126 05:48:15 -- common/autotest_common.sh@660 -- # es=1 00:29:12.126 05:48:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:12.126 00:29:12.126 real 0m1.711s 00:29:12.126 user 0m1.316s 00:29:12.126 sys 0m0.294s 00:29:12.126 05:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.126 05:48:15 -- common/autotest_common.sh@10 -- # set +x 00:29:12.126 ************************************ 00:29:12.126 END 
TEST dd_unknown_flag 00:29:12.126 ************************************ 00:29:12.126 05:48:15 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:12.126 05:48:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:12.126 05:48:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.126 05:48:15 -- common/autotest_common.sh@10 -- # set +x 00:29:12.126 ************************************ 00:29:12.126 START TEST dd_invalid_json 00:29:12.126 ************************************ 00:29:12.126 05:48:15 -- common/autotest_common.sh@1104 -- # invalid_json 00:29:12.126 05:48:15 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:12.126 05:48:15 -- dd/negative_dd.sh@95 -- # : 00:29:12.126 05:48:15 -- common/autotest_common.sh@640 -- # local es=0 00:29:12.126 05:48:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:12.126 05:48:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.126 05:48:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:12.126 05:48:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.126 05:48:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:12.126 05:48:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.126 05:48:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:12.126 05:48:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:12.126 05:48:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:12.126 05:48:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:12.126 [2024-10-07 05:48:16.002439] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:12.126 [2024-10-07 05:48:16.002683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180160 ] 00:29:12.385 [2024-10-07 05:48:16.172583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.645 [2024-10-07 05:48:16.376421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.645 [2024-10-07 05:48:16.376635] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:12.645 [2024-10-07 05:48:16.376679] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:12.645 [2024-10-07 05:48:16.376803] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:12.904 05:48:16 -- common/autotest_common.sh@643 -- # es=234 00:29:12.904 05:48:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:12.904 05:48:16 -- common/autotest_common.sh@652 -- # es=106 00:29:12.904 05:48:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:12.904 05:48:16 -- common/autotest_common.sh@660 -- # es=1 00:29:12.904 05:48:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:12.904 00:29:12.904 real 0m0.794s 00:29:12.904 user 0m0.547s 00:29:12.904 sys 0m0.148s 00:29:12.904 05:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.904 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 ************************************ 00:29:12.904 END TEST dd_invalid_json 00:29:12.904 ************************************ 00:29:12.904 00:29:12.904 real 0m6.524s 00:29:12.904 user 0m4.229s 00:29:12.904 sys 0m1.938s 00:29:12.904 05:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.904 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 ************************************ 00:29:12.904 END TEST spdk_dd_negative 00:29:12.904 ************************************ 00:29:12.904 ************************************ 00:29:12.904 END TEST spdk_dd 00:29:12.904 ************************************ 00:29:12.904 00:29:12.904 real 2m27.863s 00:29:12.904 user 1m53.345s 00:29:12.904 sys 0m24.043s 00:29:12.904 05:48:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.904 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 05:48:16 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:29:12.904 05:48:16 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:12.904 05:48:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:12.904 05:48:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.904 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 ************************************ 00:29:12.904 START TEST blockdev_nvme 00:29:12.904 ************************************ 00:29:12.904 05:48:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:13.163 * Looking for test storage... 
00:29:13.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:13.163 05:48:16 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:13.163 05:48:16 -- bdev/nbd_common.sh@6 -- # set -e 00:29:13.163 05:48:16 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:13.163 05:48:16 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:13.163 05:48:16 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:13.163 05:48:16 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:13.163 05:48:16 -- bdev/blockdev.sh@18 -- # : 00:29:13.163 05:48:16 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:13.163 05:48:16 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:13.163 05:48:16 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:13.163 05:48:16 -- bdev/blockdev.sh@672 -- # uname -s 00:29:13.163 05:48:16 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:13.163 05:48:16 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:13.163 05:48:16 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:13.163 05:48:16 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:13.163 05:48:16 -- bdev/blockdev.sh@682 -- # dek= 00:29:13.163 05:48:16 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:13.163 05:48:16 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:13.163 05:48:16 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:13.163 05:48:16 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:13.163 05:48:16 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:13.163 05:48:16 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:13.163 05:48:16 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=180254 00:29:13.163 05:48:16 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:13.163 05:48:16 -- bdev/blockdev.sh@47 -- # waitforlisten 180254 00:29:13.163 05:48:16 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:13.163 05:48:16 -- common/autotest_common.sh@819 -- # '[' -z 180254 ']' 00:29:13.163 05:48:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.163 05:48:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:13.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.163 05:48:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.163 05:48:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.163 05:48:16 -- common/autotest_common.sh@10 -- # set +x 00:29:13.163 [2024-10-07 05:48:17.059983] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:13.163 [2024-10-07 05:48:17.060208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180254 ] 00:29:13.422 [2024-10-07 05:48:17.231473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.681 [2024-10-07 05:48:17.417544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.681 [2024-10-07 05:48:17.418101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.068 05:48:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.068 05:48:18 -- common/autotest_common.sh@852 -- # return 0 00:29:15.068 05:48:18 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:15.068 05:48:18 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:15.068 05:48:18 -- bdev/blockdev.sh@79 -- # local json 00:29:15.068 05:48:18 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:15.068 05:48:18 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:15.068 05:48:18 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@738 -- # cat 00:29:15.068 05:48:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:15.068 05:48:18 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:15.068 05:48:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.068 05:48:18 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:15.068 05:48:18 -- common/autotest_common.sh@10 -- # set +x 00:29:15.068 05:48:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.068 05:48:18 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:15.068 05:48:18 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:15.068 05:48:18 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "5f9add4d-4182-4d14-ba3d-9fb07e0dc650"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5f9add4d-4182-4d14-ba3d-9fb07e0dc650",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:15.068 05:48:19 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:15.068 05:48:19 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:15.068 05:48:19 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:15.068 05:48:19 -- bdev/blockdev.sh@752 -- # killprocess 180254 00:29:15.068 05:48:19 -- common/autotest_common.sh@926 -- # '[' -z 180254 ']' 00:29:15.068 05:48:19 -- common/autotest_common.sh@930 -- # kill -0 180254 00:29:15.068 05:48:19 -- common/autotest_common.sh@931 -- # uname 00:29:15.068 05:48:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:15.068 05:48:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 180254 00:29:15.068 05:48:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:15.068 05:48:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:15.068 05:48:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 180254' 00:29:15.068 killing process with pid 180254 00:29:15.068 05:48:19 -- common/autotest_common.sh@945 -- # kill 180254 00:29:15.068 05:48:19 -- common/autotest_common.sh@950 -- # wait 180254 00:29:16.975 05:48:20 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:16.975 05:48:20 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:16.975 05:48:20 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:16.975 05:48:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:16.975 05:48:20 -- common/autotest_common.sh@10 -- # set +x 00:29:17.234 ************************************ 00:29:17.234 START TEST bdev_hello_world 00:29:17.234 ************************************ 00:29:17.234 05:48:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:17.234 [2024-10-07 05:48:21.023201] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:17.234 [2024-10-07 05:48:21.023433] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180347 ] 00:29:17.234 [2024-10-07 05:48:21.189908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.504 [2024-10-07 05:48:21.406442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.098 [2024-10-07 05:48:21.824067] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:18.098 [2024-10-07 05:48:21.824350] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:18.098 [2024-10-07 05:48:21.824422] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:18.098 [2024-10-07 05:48:21.827109] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:18.098 [2024-10-07 05:48:21.827665] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:18.098 [2024-10-07 05:48:21.827848] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:18.098 [2024-10-07 05:48:21.828204] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:18.098 00:29:18.098 [2024-10-07 05:48:21.828395] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:19.034 00:29:19.034 real 0m1.895s 00:29:19.034 user 0m1.495s 00:29:19.034 sys 0m0.300s 00:29:19.034 05:48:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.034 05:48:22 -- common/autotest_common.sh@10 -- # set +x 00:29:19.034 ************************************ 00:29:19.034 END TEST bdev_hello_world 00:29:19.034 ************************************ 00:29:19.034 05:48:22 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:19.034 05:48:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:19.034 05:48:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.034 05:48:22 -- common/autotest_common.sh@10 -- # set +x 00:29:19.034 ************************************ 00:29:19.034 START TEST bdev_bounds 00:29:19.034 ************************************ 00:29:19.034 05:48:22 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:19.034 05:48:22 -- bdev/blockdev.sh@288 -- # bdevio_pid=180397 00:29:19.034 05:48:22 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:19.034 05:48:22 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 180397' 00:29:19.034 Process bdevio pid: 180397 00:29:19.034 05:48:22 -- bdev/blockdev.sh@291 -- # waitforlisten 180397 00:29:19.034 05:48:22 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:19.034 05:48:22 -- common/autotest_common.sh@819 -- # '[' -z 180397 ']' 00:29:19.034 05:48:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.034 05:48:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:19.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.035 05:48:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
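bdev_bounds starts the bdevio app against bdev.json, remembers its pid (180397 above), and then blocks in waitforlisten until the app's RPC socket answers; the 'Waiting for process to start up...' line is that loop reporting progress. A rough stand-in for the wait, reduced to its core idea (the real helper retries RPCs and handles timeouts beyond what is shown here):

  # Hedged sketch of waiting for an SPDK app's RPC socket; helper details are assumptions.
  wait_for_rpc_socket() {
      local sock=${1:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          [ -S "$sock" ] && return 0   # socket present, app is listening
          sleep 0.1
      done
      return 1                         # gave up after ~10 seconds
  }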
00:29:19.035 05:48:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:19.035 05:48:22 -- common/autotest_common.sh@10 -- # set +x 00:29:19.035 [2024-10-07 05:48:22.970414] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:19.035 [2024-10-07 05:48:22.970787] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180397 ] 00:29:19.294 [2024-10-07 05:48:23.134643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:19.553 [2024-10-07 05:48:23.335822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.553 [2024-10-07 05:48:23.335958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.553 [2024-10-07 05:48:23.336245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.121 05:48:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.122 05:48:23 -- common/autotest_common.sh@852 -- # return 0 00:29:20.122 05:48:23 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:20.122 I/O targets: 00:29:20.122 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:20.122 00:29:20.122 00:29:20.122 CUnit - A unit testing framework for C - Version 2.1-3 00:29:20.122 http://cunit.sourceforge.net/ 00:29:20.122 00:29:20.122 00:29:20.122 Suite: bdevio tests on: Nvme0n1 00:29:20.122 Test: blockdev write read block ...passed 00:29:20.122 Test: blockdev write zeroes read block ...passed 00:29:20.122 Test: blockdev write zeroes read no split ...passed 00:29:20.122 Test: blockdev write zeroes read split ...passed 00:29:20.122 Test: blockdev write zeroes read split partial ...passed 00:29:20.122 Test: blockdev reset ...[2024-10-07 05:48:24.078362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:20.122 [2024-10-07 05:48:24.081627] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:20.122 passed 00:29:20.122 Test: blockdev write read 8 blocks ...passed 00:29:20.122 Test: blockdev write read size > 128k ...passed 00:29:20.122 Test: blockdev write read invalid size ...passed 00:29:20.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:20.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:20.122 Test: blockdev write read max offset ...passed 00:29:20.122 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:20.122 Test: blockdev writev readv 8 blocks ...passed 00:29:20.122 Test: blockdev writev readv 30 x 1block ...passed 00:29:20.122 Test: blockdev writev readv block ...passed 00:29:20.122 Test: blockdev writev readv size > 128k ...passed 00:29:20.122 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:20.122 Test: blockdev comparev and writev ...[2024-10-07 05:48:24.089334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1040d000 len:0x1000 00:29:20.122 [2024-10-07 05:48:24.089406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:20.122 passed 00:29:20.122 Test: blockdev nvme passthru rw ...passed 00:29:20.122 Test: blockdev nvme passthru vendor specific ...[2024-10-07 05:48:24.090281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:20.122 passed 00:29:20.122 Test: blockdev nvme admin passthru ...[2024-10-07 05:48:24.090330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:20.122 passed 00:29:20.122 Test: blockdev copy ...passed 00:29:20.122 00:29:20.122 Run Summary: Type Total Ran Passed Failed Inactive 00:29:20.122 suites 1 1 n/a 0 0 00:29:20.122 tests 23 23 23 0 0 00:29:20.122 asserts 152 152 152 0 n/a 00:29:20.122 00:29:20.122 Elapsed time = 0.185 seconds 00:29:20.381 0 00:29:20.381 05:48:24 -- bdev/blockdev.sh@293 -- # killprocess 180397 00:29:20.381 05:48:24 -- common/autotest_common.sh@926 -- # '[' -z 180397 ']' 00:29:20.381 05:48:24 -- common/autotest_common.sh@930 -- # kill -0 180397 00:29:20.381 05:48:24 -- common/autotest_common.sh@931 -- # uname 00:29:20.381 05:48:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:20.381 05:48:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 180397 00:29:20.381 05:48:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:20.381 05:48:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:20.381 05:48:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 180397' 00:29:20.381 killing process with pid 180397 00:29:20.381 05:48:24 -- common/autotest_common.sh@945 -- # kill 180397 00:29:20.381 05:48:24 -- common/autotest_common.sh@950 -- # wait 180397 00:29:21.319 05:48:25 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:21.319 00:29:21.319 real 0m2.268s 00:29:21.319 user 0m5.326s 00:29:21.319 sys 0m0.374s 00:29:21.319 05:48:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.319 05:48:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.319 ************************************ 00:29:21.319 END TEST bdev_bounds 00:29:21.319 ************************************ 00:29:21.319 05:48:25 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
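The bdev_nbd test launched here exports Nvme0n1 as a Linux NBD device through the bdev_svc app's RPC socket and verifies it with plain dd/cmp round-trips before a short lvol/mkfs check. Condensed to the RPC calls and dd invocations that appear in the trace below, a hand-run version would look roughly like this (the bdev_svc app is assumed to already be listening on /var/tmp/spdk-nbd.sock):

  # Hedged sketch of the NBD round-trip; socket path and RPC names as seen below.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1 /dev/nbd0                  # export the bdev as /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of reference data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                     # verify what was written
  $RPC nbd_stop_disk /dev/nbd0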
00:29:21.319 05:48:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:21.319 05:48:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:21.319 05:48:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.319 ************************************ 00:29:21.319 START TEST bdev_nbd 00:29:21.319 ************************************ 00:29:21.319 05:48:25 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:21.319 05:48:25 -- bdev/blockdev.sh@298 -- # uname -s 00:29:21.319 05:48:25 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:21.319 05:48:25 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.319 05:48:25 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:21.319 05:48:25 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:29:21.319 05:48:25 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:21.319 05:48:25 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:21.319 05:48:25 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:21.319 05:48:25 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:21.319 05:48:25 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:21.319 05:48:25 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:21.319 05:48:25 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:21.319 05:48:25 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:21.319 05:48:25 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:29:21.319 05:48:25 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:21.319 05:48:25 -- bdev/blockdev.sh@316 -- # nbd_pid=180462 00:29:21.319 05:48:25 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:21.319 05:48:25 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:21.319 05:48:25 -- bdev/blockdev.sh@318 -- # waitforlisten 180462 /var/tmp/spdk-nbd.sock 00:29:21.319 05:48:25 -- common/autotest_common.sh@819 -- # '[' -z 180462 ']' 00:29:21.319 05:48:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:21.319 05:48:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:21.319 05:48:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:21.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:21.319 05:48:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:21.319 05:48:25 -- common/autotest_common.sh@10 -- # set +x 00:29:21.319 [2024-10-07 05:48:25.295253] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:21.319 [2024-10-07 05:48:25.295452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.578 [2024-10-07 05:48:25.446186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.836 [2024-10-07 05:48:25.632798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.404 05:48:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.404 05:48:26 -- common/autotest_common.sh@852 -- # return 0 00:29:22.404 05:48:26 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@24 -- # local i 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:22.404 05:48:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:22.662 05:48:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:22.662 05:48:26 -- common/autotest_common.sh@857 -- # local i 00:29:22.662 05:48:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:22.662 05:48:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:22.662 05:48:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:22.662 05:48:26 -- common/autotest_common.sh@861 -- # break 00:29:22.662 05:48:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:22.662 05:48:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:22.662 05:48:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:22.662 1+0 records in 00:29:22.662 1+0 records out 00:29:22.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506922 s, 8.1 MB/s 00:29:22.662 05:48:26 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.662 05:48:26 -- common/autotest_common.sh@874 -- # size=4096 00:29:22.662 05:48:26 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:22.662 05:48:26 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:22.662 05:48:26 -- common/autotest_common.sh@877 -- # return 0 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:22.662 05:48:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:22.920 05:48:26 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:22.920 { 00:29:22.920 "nbd_device": "/dev/nbd0", 00:29:22.920 "bdev_name": "Nvme0n1" 00:29:22.920 } 00:29:22.920 ]' 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:22.920 { 00:29:22.920 "nbd_device": "/dev/nbd0", 00:29:22.920 "bdev_name": "Nvme0n1" 00:29:22.920 } 00:29:22.920 ]' 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@51 -- # local i 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:22.920 05:48:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:23.177 05:48:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:23.177 05:48:27 -- bdev/nbd_common.sh@41 -- # break 00:29:23.177 05:48:27 -- bdev/nbd_common.sh@45 -- # return 0 00:29:23.177 05:48:27 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:23.177 05:48:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.177 05:48:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@65 -- # true 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@65 -- # count=0 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@122 -- # count=0 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@127 -- # return 0 00:29:23.433 05:48:27 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@12 -- # local i 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.433 05:48:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:23.691 /dev/nbd0 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:23.691 05:48:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:23.691 05:48:27 -- common/autotest_common.sh@857 -- # local i 00:29:23.691 05:48:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:23.691 05:48:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:23.691 05:48:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:23.691 05:48:27 -- common/autotest_common.sh@861 -- # break 00:29:23.691 05:48:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:23.691 05:48:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:23.691 05:48:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:23.691 1+0 records in 00:29:23.691 1+0 records out 00:29:23.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550698 s, 7.4 MB/s 00:29:23.691 05:48:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.691 05:48:27 -- common/autotest_common.sh@874 -- # size=4096 00:29:23.691 05:48:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:23.691 05:48:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:23.691 05:48:27 -- common/autotest_common.sh@877 -- # return 0 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.691 05:48:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:23.949 05:48:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:23.950 { 00:29:23.950 "nbd_device": "/dev/nbd0", 00:29:23.950 "bdev_name": "Nvme0n1" 00:29:23.950 } 00:29:23.950 ]' 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:23.950 { 00:29:23.950 "nbd_device": "/dev/nbd0", 00:29:23.950 "bdev_name": "Nvme0n1" 00:29:23.950 } 00:29:23.950 ]' 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@65 -- # count=1 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@95 -- # count=1 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:23.950 05:48:27 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:23.950 05:48:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:24.207 256+0 records in 00:29:24.208 256+0 records out 00:29:24.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00978277 s, 107 MB/s 00:29:24.208 05:48:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:24.208 05:48:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:24.208 256+0 records in 00:29:24.208 256+0 records out 00:29:24.208 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0664083 s, 15.8 MB/s 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@51 -- # local i 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:24.208 05:48:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@41 -- # break 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@45 -- # return 0 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:24.466 05:48:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:24.466 
05:48:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@65 -- # true 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@65 -- # count=0 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@104 -- # count=0 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@109 -- # return 0 00:29:24.723 05:48:28 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:24.723 05:48:28 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:24.982 malloc_lvol_verify 00:29:24.982 05:48:28 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:24.982 8d7b5c4f-b50b-4f9a-80e1-fd5b7ac1491f 00:29:24.982 05:48:28 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:25.240 a80e7e94-f6b4-4da5-b987-39464dd0e445 00:29:25.240 05:48:29 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:25.500 /dev/nbd0 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:25.500 mke2fs 1.46.5 (30-Dec-2021) 00:29:25.500 00:29:25.500 Filesystem too small for a journal 00:29:25.500 Discarding device blocks: 0/1024 done 00:29:25.500 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:25.500 00:29:25.500 Allocating group tables: 0/1 done 00:29:25.500 Writing inode tables: 0/1 done 00:29:25.500 Writing superblocks and filesystem accounting information: 0/1 done 00:29:25.500 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@51 -- # local i 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.500 05:48:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@41 -- # break 00:29:25.759 05:48:29 -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:25.759 05:48:29 -- bdev/nbd_common.sh@147 -- # return 0 00:29:25.759 05:48:29 -- bdev/blockdev.sh@324 -- # killprocess 180462 00:29:25.759 05:48:29 -- common/autotest_common.sh@926 -- # '[' -z 180462 ']' 00:29:25.759 05:48:29 -- common/autotest_common.sh@930 -- # kill -0 180462 00:29:25.759 05:48:29 -- common/autotest_common.sh@931 -- # uname 00:29:25.759 05:48:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.759 05:48:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 180462 00:29:25.759 05:48:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.759 killing process with pid 180462 00:29:25.759 05:48:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:25.759 05:48:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 180462' 00:29:25.759 05:48:29 -- common/autotest_common.sh@945 -- # kill 180462 00:29:25.759 05:48:29 -- common/autotest_common.sh@950 -- # wait 180462 00:29:26.694 05:48:30 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:26.694 00:29:26.694 real 0m5.245s 00:29:26.694 user 0m7.576s 00:29:26.694 sys 0m1.050s 00:29:26.694 05:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:26.694 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:29:26.694 ************************************ 00:29:26.694 END TEST bdev_nbd 00:29:26.694 ************************************ 00:29:26.694 05:48:30 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:26.694 05:48:30 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:26.694 skipping fio tests on NVMe due to multi-ns failures. 00:29:26.694 05:48:30 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:26.694 05:48:30 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:26.694 05:48:30 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:26.694 05:48:30 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:26.694 05:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:26.694 05:48:30 -- common/autotest_common.sh@10 -- # set +x 00:29:26.694 ************************************ 00:29:26.694 START TEST bdev_verify 00:29:26.694 ************************************ 00:29:26.694 05:48:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:26.694 [2024-10-07 05:48:30.606084] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:26.694 [2024-10-07 05:48:30.606285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180647 ] 00:29:26.953 [2024-10-07 05:48:30.783342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:27.212 [2024-10-07 05:48:31.014483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.212 [2024-10-07 05:48:31.014479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.471 Running I/O for 5 seconds... 
00:29:32.744 00:29:32.744 Latency(us) 00:29:32.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.744 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:32.744 Verification LBA range: start 0x0 length 0xa0000 00:29:32.744 Nvme0n1 : 5.01 15603.23 60.95 0.00 0.00 8169.34 404.01 17277.67 00:29:32.744 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:32.744 Verification LBA range: start 0xa0000 length 0xa0000 00:29:32.744 Nvme0n1 : 5.01 15649.81 61.13 0.00 0.00 8145.28 938.36 16205.27 00:29:32.744 =================================================================================================================== 00:29:32.744 Total : 31253.05 122.08 0.00 0.00 8157.29 404.01 17277.67 00:29:39.309 00:29:39.309 real 0m12.504s 00:29:39.309 user 0m23.648s 00:29:39.309 sys 0m0.405s 00:29:39.309 05:48:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.309 05:48:43 -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 ************************************ 00:29:39.309 END TEST bdev_verify 00:29:39.309 ************************************ 00:29:39.309 05:48:43 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:39.309 05:48:43 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:39.309 05:48:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:39.309 05:48:43 -- common/autotest_common.sh@10 -- # set +x 00:29:39.309 ************************************ 00:29:39.309 START TEST bdev_verify_big_io 00:29:39.309 ************************************ 00:29:39.309 05:48:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:39.309 [2024-10-07 05:48:43.149506] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:39.309 [2024-10-07 05:48:43.149703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180797 ] 00:29:39.568 [2024-10-07 05:48:43.306554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:39.568 [2024-10-07 05:48:43.491030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.568 [2024-10-07 05:48:43.491041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.136 Running I/O for 5 seconds... 
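Both verify workloads in this test, the 4 KiB pass whose results appear above and the 64 KiB big-I/O pass just launched, are single bdevperf invocations against the JSON bdev config generated earlier in the build. A minimal standalone sketch of the same command, assuming the repo checkout used in this run; the flag comments reflect bdevperf's usual option meanings, and -C is simply carried over from the traced command line:

SPDK=/home/vagrant/spdk_repo/spdk
# -q: queue depth per job, -o: I/O size in bytes, -w: workload (verify = write,
# read back, and check), -t: run time in seconds, -m: reactor core mask.
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The big-I/O results that follow use -o 65536 instead: fewer IOPS but roughly 2.5x the aggregate MiB/s, since each I/O is sixteen times larger.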
00:29:45.444 00:29:45.444 Latency(us) 00:29:45.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.444 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:45.444 Verification LBA range: start 0x0 length 0xa000 00:29:45.444 Nvme0n1 : 5.02 2691.30 168.21 0.00 0.00 47023.39 644.19 76736.70 00:29:45.444 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:45.444 Verification LBA range: start 0xa000 length 0xa000 00:29:45.444 Nvme0n1 : 5.03 2311.89 144.49 0.00 0.00 54660.07 640.47 83409.45 00:29:45.444 =================================================================================================================== 00:29:45.444 Total : 5003.19 312.70 0.00 0.00 50554.50 640.47 83409.45 00:29:46.825 00:29:46.825 real 0m7.318s 00:29:46.825 user 0m13.478s 00:29:46.825 sys 0m0.289s 00:29:46.825 05:48:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.825 05:48:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.825 ************************************ 00:29:46.825 END TEST bdev_verify_big_io 00:29:46.825 ************************************ 00:29:46.825 05:48:50 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:46.825 05:48:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:46.825 05:48:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.825 05:48:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.825 ************************************ 00:29:46.825 START TEST bdev_write_zeroes 00:29:46.825 ************************************ 00:29:46.825 05:48:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:46.825 [2024-10-07 05:48:50.525951] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:46.825 [2024-10-07 05:48:50.526104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180905 ] 00:29:46.825 [2024-10-07 05:48:50.680298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.085 [2024-10-07 05:48:50.866932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.343 Running I/O for 1 seconds... 
00:29:48.716 00:29:48.716 Latency(us) 00:29:48.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.716 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:48.716 Nvme0n1 : 1.00 63689.10 248.79 0.00 0.00 2004.57 752.17 13702.98 00:29:48.716 =================================================================================================================== 00:29:48.716 Total : 63689.10 248.79 0.00 0.00 2004.57 752.17 13702.98 00:29:49.653 00:29:49.653 real 0m2.825s 00:29:49.653 user 0m2.452s 00:29:49.653 sys 0m0.273s 00:29:49.653 05:48:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.653 05:48:53 -- common/autotest_common.sh@10 -- # set +x 00:29:49.653 ************************************ 00:29:49.653 END TEST bdev_write_zeroes 00:29:49.653 ************************************ 00:29:49.653 05:48:53 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:49.653 05:48:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:49.653 05:48:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:49.653 05:48:53 -- common/autotest_common.sh@10 -- # set +x 00:29:49.653 ************************************ 00:29:49.653 START TEST bdev_json_nonenclosed 00:29:49.653 ************************************ 00:29:49.653 05:48:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:49.653 [2024-10-07 05:48:53.399529] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:49.653 [2024-10-07 05:48:53.399875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180962 ] 00:29:49.653 [2024-10-07 05:48:53.552629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.912 [2024-10-07 05:48:53.739535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.912 [2024-10-07 05:48:53.740034] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:49.912 [2024-10-07 05:48:53.740187] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:50.171 00:29:50.171 real 0m0.726s 00:29:50.171 user 0m0.489s 00:29:50.171 sys 0m0.137s 00:29:50.171 05:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.171 05:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:50.171 ************************************ 00:29:50.171 END TEST bdev_json_nonenclosed 00:29:50.171 ************************************ 00:29:50.171 05:48:54 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:50.171 05:48:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:50.171 05:48:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.171 05:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:50.171 ************************************ 00:29:50.171 START TEST bdev_json_nonarray 00:29:50.171 ************************************ 00:29:50.171 05:48:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:50.430 [2024-10-07 05:48:54.182465] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:50.430 [2024-10-07 05:48:54.182797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180992 ] 00:29:50.430 [2024-10-07 05:48:54.336137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.688 [2024-10-07 05:48:54.519735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.688 [2024-10-07 05:48:54.520275] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:50.688 [2024-10-07 05:48:54.520446] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:50.947 ************************************ 00:29:50.947 END TEST bdev_json_nonarray 00:29:50.947 ************************************ 00:29:50.947 00:29:50.947 real 0m0.723s 00:29:50.947 user 0m0.490s 00:29:50.947 sys 0m0.133s 00:29:50.947 05:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.947 05:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:50.947 05:48:54 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:50.947 05:48:54 -- bdev/blockdev.sh@809 -- # cleanup 00:29:50.947 05:48:54 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:50.947 05:48:54 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:50.947 05:48:54 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:29:50.947 05:48:54 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:29:50.947 ************************************ 00:29:50.947 END TEST blockdev_nvme 00:29:50.947 ************************************ 00:29:50.947 00:29:50.947 real 0m38.038s 00:29:50.947 user 0m59.414s 00:29:50.947 sys 0m3.850s 00:29:50.947 05:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.947 05:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:51.207 05:48:54 -- spdk/autotest.sh@219 -- # uname -s 00:29:51.207 05:48:54 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:29:51.207 05:48:54 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:51.207 05:48:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:51.207 05:48:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.207 05:48:54 -- common/autotest_common.sh@10 -- # set +x 00:29:51.207 ************************************ 00:29:51.207 START TEST blockdev_nvme_gpt 00:29:51.207 ************************************ 00:29:51.207 05:48:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:51.207 * Looking for test storage... 
00:29:51.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:51.207 05:48:55 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:51.207 05:48:55 -- bdev/nbd_common.sh@6 -- # set -e 00:29:51.207 05:48:55 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:51.207 05:48:55 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:51.207 05:48:55 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:51.207 05:48:55 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:51.207 05:48:55 -- bdev/blockdev.sh@18 -- # : 00:29:51.207 05:48:55 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:51.207 05:48:55 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:51.207 05:48:55 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:51.207 05:48:55 -- bdev/blockdev.sh@672 -- # uname -s 00:29:51.207 05:48:55 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:51.207 05:48:55 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:51.207 05:48:55 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:29:51.207 05:48:55 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:51.207 05:48:55 -- bdev/blockdev.sh@682 -- # dek= 00:29:51.207 05:48:55 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:51.207 05:48:55 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:51.207 05:48:55 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:51.207 05:48:55 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:29:51.207 05:48:55 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:29:51.207 05:48:55 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:51.207 05:48:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=181069 00:29:51.207 05:48:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:51.207 05:48:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:51.207 05:48:55 -- bdev/blockdev.sh@47 -- # waitforlisten 181069 00:29:51.207 05:48:55 -- common/autotest_common.sh@819 -- # '[' -z 181069 ']' 00:29:51.207 05:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.207 05:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:51.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.207 05:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.207 05:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:51.207 05:48:55 -- common/autotest_common.sh@10 -- # set +x 00:29:51.207 [2024-10-07 05:48:55.108303] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:29:51.207 [2024-10-07 05:48:55.108453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181069 ] 00:29:51.467 [2024-10-07 05:48:55.264120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.726 [2024-10-07 05:48:55.473073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:51.726 [2024-10-07 05:48:55.473291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.104 05:48:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:53.104 05:48:56 -- common/autotest_common.sh@852 -- # return 0 00:29:53.104 05:48:56 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:53.104 05:48:56 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:29:53.104 05:48:56 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:53.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:53.104 Waiting for block devices as requested 00:29:53.104 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:53.362 05:48:57 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:29:53.362 05:48:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:29:53.362 05:48:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:29:53.362 05:48:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:29:53.362 05:48:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:29:53.362 05:48:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:29:53.362 05:48:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:29:53.362 05:48:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:53.362 05:48:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:29:53.362 05:48:57 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:29:53.362 05:48:57 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:29:53.362 05:48:57 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:29:53.362 05:48:57 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:53.362 05:48:57 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:29:53.362 05:48:57 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:29:53.362 05:48:57 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:29:53.362 05:48:57 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:53.362 BYT; 00:29:53.362 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:53.362 05:48:57 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:53.362 BYT; 00:29:53.362 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:53.362 05:48:57 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:29:53.362 05:48:57 -- bdev/blockdev.sh@114 -- # break 00:29:53.362 05:48:57 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:29:53.362 05:48:57 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:53.362 05:48:57 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:53.363 05:48:57 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:53.621 05:48:57 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:29:53.621 05:48:57 -- scripts/common.sh@410 -- # local spdk_guid 00:29:53.621 05:48:57 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:53.621 05:48:57 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:53.621 05:48:57 -- scripts/common.sh@415 -- # IFS='()' 00:29:53.621 05:48:57 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:29:53.621 05:48:57 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:53.621 05:48:57 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:53.621 05:48:57 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:53.621 05:48:57 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:53.621 05:48:57 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:53.621 05:48:57 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:29:53.621 05:48:57 -- scripts/common.sh@422 -- # local spdk_guid 00:29:53.621 05:48:57 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:53.621 05:48:57 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:53.621 05:48:57 -- scripts/common.sh@427 -- # IFS='()' 00:29:53.621 05:48:57 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:29:53.621 05:48:57 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:53.621 05:48:57 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:53.621 05:48:57 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:53.621 05:48:57 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:53.621 05:48:57 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:53.622 05:48:57 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:54.557 The operation has completed successfully. 00:29:54.557 05:48:58 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:55.932 The operation has completed successfully. 
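Condensed, the GPT preparation traced above does two things: write a fresh GPT label with two half-size partitions, then retag each partition with SPDK's partition type GUIDs (grepped out of module/bdev/gpt/gpt.h) plus a fixed unique GUID, so that the gpt vbdev module will expose them as Nvme0n1p1 and Nvme0n1p2. The same steps as a standalone sketch, using the device and GUIDs from this run:

DISK=/dev/nvme0n1
# Fresh GPT label with two partitions covering the first and second half of the disk.
parted -s "$DISK" mklabel gpt \
  mkpart SPDK_TEST_first 0% 50% \
  mkpart SPDK_TEST_second 50% 100%
# Partition 1 gets SPDK_GPT_PART_TYPE_GUID, partition 2 the legacy *_GUID_OLD value,
# and each gets the unique partition GUID the test later matches against bdev aliases.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DISK"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DISK"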
00:29:55.932 05:48:59 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:55.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:56.191 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:57.127 05:49:00 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:29:57.127 05:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 [] 00:29:57.127 05:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:00 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:29:57.127 05:49:00 -- bdev/blockdev.sh@79 -- # local json 00:29:57.127 05:49:00 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:57.127 05:49:00 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:57.127 05:49:00 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:57.127 05:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:00 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:57.127 05:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:00 -- bdev/blockdev.sh@738 -- # cat 00:29:57.127 05:49:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:57.127 05:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:00 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:57.127 05:49:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:00 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:57.127 05:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:57.127 05:49:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:57.127 05:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.127 05:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:57.127 05:49:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:57.127 05:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.127 05:49:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:57.127 05:49:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:29:57.127 05:49:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:57.387 05:49:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:57.387 05:49:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:29:57.387 05:49:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:57.387 05:49:01 -- bdev/blockdev.sh@752 -- # killprocess 181069 00:29:57.387 05:49:01 -- common/autotest_common.sh@926 -- # '[' -z 181069 ']' 00:29:57.387 05:49:01 -- common/autotest_common.sh@930 -- # kill -0 181069 00:29:57.387 05:49:01 -- common/autotest_common.sh@931 -- # uname 00:29:57.387 05:49:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:57.387 05:49:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 181069 00:29:57.387 05:49:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:57.387 05:49:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:57.387 killing process with pid 181069 00:29:57.387 05:49:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 181069' 00:29:57.387 05:49:01 -- common/autotest_common.sh@945 -- # kill 181069 00:29:57.387 05:49:01 -- common/autotest_common.sh@950 -- # wait 181069 00:29:59.291 05:49:03 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:59.291 05:49:03 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:29:59.291 05:49:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:59.291 05:49:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:59.291 05:49:03 -- common/autotest_common.sh@10 -- # set +x 00:29:59.291 ************************************ 00:29:59.291 START TEST bdev_hello_world 00:29:59.291 ************************************ 00:29:59.291 05:49:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:29:59.291 [2024-10-07 05:49:03.148796] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:29:59.291 [2024-10-07 05:49:03.149077] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181513 ] 00:29:59.549 [2024-10-07 05:49:03.319761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.549 [2024-10-07 05:49:03.512542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.117 [2024-10-07 05:49:03.921526] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:00.117 [2024-10-07 05:49:03.921618] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:30:00.117 [2024-10-07 05:49:03.921652] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:00.117 [2024-10-07 05:49:03.924156] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:00.117 [2024-10-07 05:49:03.924537] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:00.117 [2024-10-07 05:49:03.924589] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:00.117 [2024-10-07 05:49:03.924857] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:00.117 00:30:00.117 [2024-10-07 05:49:03.924895] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:01.053 00:30:01.053 real 0m1.850s 00:30:01.053 user 0m1.467s 00:30:01.053 sys 0m0.284s 00:30:01.053 05:49:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.053 05:49:04 -- common/autotest_common.sh@10 -- # set +x 00:30:01.053 ************************************ 00:30:01.053 END TEST bdev_hello_world 00:30:01.053 ************************************ 00:30:01.053 05:49:04 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:30:01.053 05:49:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:01.053 05:49:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:01.053 05:49:04 -- common/autotest_common.sh@10 -- # set +x 00:30:01.053 ************************************ 00:30:01.053 START TEST bdev_bounds 00:30:01.053 ************************************ 00:30:01.053 05:49:04 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:30:01.053 05:49:04 -- bdev/blockdev.sh@288 -- # bdevio_pid=181557 00:30:01.053 05:49:04 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:01.053 05:49:04 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:01.053 05:49:04 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 181557' 00:30:01.053 Process bdevio pid: 181557 00:30:01.053 05:49:04 -- bdev/blockdev.sh@291 -- # waitforlisten 181557 00:30:01.053 05:49:04 -- common/autotest_common.sh@819 -- # '[' -z 181557 ']' 00:30:01.053 05:49:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.053 05:49:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:01.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.053 05:49:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
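This bounds test drives the CUnit-based bdevio app rather than bdevperf: bdevio is started against the same bdev.json and left waiting, and a helper script then triggers the suites over RPC. A rough sketch of that pairing, assuming the usual meanings of the flags seen in the trace (-w: wait for the RPC trigger, -s 0: no up-front memory reservation):

SPDK=/home/vagrant/spdk_repo/spdk
# Start bdevio idle (-w, assumed meaning) so the suites can be kicked off over RPC.
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
# Once the RPC socket is up, run every registered suite against each bdev.
"$SPDK/test/bdev/bdevio/tests.py" perform_tests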
00:30:01.053 05:49:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:01.053 05:49:04 -- common/autotest_common.sh@10 -- # set +x 00:30:01.313 [2024-10-07 05:49:05.062930] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:01.313 [2024-10-07 05:49:05.063135] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181557 ] 00:30:01.313 [2024-10-07 05:49:05.241293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:01.572 [2024-10-07 05:49:05.429572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.572 [2024-10-07 05:49:05.429456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.572 [2024-10-07 05:49:05.429563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.140 05:49:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:02.140 05:49:05 -- common/autotest_common.sh@852 -- # return 0 00:30:02.140 05:49:05 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:02.140 I/O targets: 00:30:02.140 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:30:02.140 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:30:02.140 00:30:02.140 00:30:02.140 CUnit - A unit testing framework for C - Version 2.1-3 00:30:02.140 http://cunit.sourceforge.net/ 00:30:02.140 00:30:02.140 00:30:02.140 Suite: bdevio tests on: Nvme0n1p2 00:30:02.140 Test: blockdev write read block ...passed 00:30:02.140 Test: blockdev write zeroes read block ...passed 00:30:02.140 Test: blockdev write zeroes read no split ...passed 00:30:02.140 Test: blockdev write zeroes read split ...passed 00:30:02.140 Test: blockdev write zeroes read split partial ...passed 00:30:02.140 Test: blockdev reset ...[2024-10-07 05:49:06.045237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:02.140 passed 00:30:02.140 Test: blockdev write read 8 blocks ...[2024-10-07 05:49:06.048574] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:02.140 passed 00:30:02.140 Test: blockdev write read size > 128k ...passed 00:30:02.140 Test: blockdev write read invalid size ...passed 00:30:02.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.140 Test: blockdev write read max offset ...passed 00:30:02.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:02.140 Test: blockdev writev readv 8 blocks ...passed 00:30:02.140 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.140 Test: blockdev writev readv block ...passed 00:30:02.140 Test: blockdev writev readv size > 128k ...passed 00:30:02.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.140 Test: blockdev comparev and writev ...[2024-10-07 05:49:06.057944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xc00b000 len:0x1000 00:30:02.140 [2024-10-07 05:49:06.058051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:02.140 passed 00:30:02.140 Test: blockdev nvme passthru rw ...passed 00:30:02.140 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.140 Test: blockdev nvme admin passthru ...passed 00:30:02.140 Test: blockdev copy ...passed 00:30:02.140 Suite: bdevio tests on: Nvme0n1p1 00:30:02.140 Test: blockdev write read block ...passed 00:30:02.140 Test: blockdev write zeroes read block ...passed 00:30:02.140 Test: blockdev write zeroes read no split ...passed 00:30:02.140 Test: blockdev write zeroes read split ...passed 00:30:02.140 Test: blockdev write zeroes read split partial ...passed 00:30:02.140 Test: blockdev reset ...[2024-10-07 05:49:06.102840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:02.140 passed 00:30:02.140 Test: blockdev write read 8 blocks ...[2024-10-07 05:49:06.106103] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:02.140 passed 00:30:02.140 Test: blockdev write read size > 128k ...passed 00:30:02.140 Test: blockdev write read invalid size ...passed 00:30:02.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:02.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:02.141 Test: blockdev write read max offset ...passed 00:30:02.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:02.141 Test: blockdev writev readv 8 blocks ...passed 00:30:02.141 Test: blockdev writev readv 30 x 1block ...passed 00:30:02.141 Test: blockdev writev readv block ...passed 00:30:02.141 Test: blockdev writev readv size > 128k ...passed 00:30:02.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:02.141 Test: blockdev comparev and writev ...[2024-10-07 05:49:06.114524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xc00d000 len:0x1000 00:30:02.141 [2024-10-07 05:49:06.114592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:02.141 passed 00:30:02.141 Test: blockdev nvme passthru rw ...passed 00:30:02.141 Test: blockdev nvme passthru vendor specific ...passed 00:30:02.141 Test: blockdev nvme admin passthru ...passed 00:30:02.141 Test: blockdev copy ...passed 00:30:02.141 00:30:02.141 Run Summary: Type Total Ran Passed Failed Inactive 00:30:02.141 suites 2 2 n/a 0 0 00:30:02.141 tests 46 46 46 0 0 00:30:02.141 asserts 284 284 284 0 n/a 00:30:02.141 00:30:02.141 Elapsed time = 0.319 seconds 00:30:02.141 0 00:30:02.400 05:49:06 -- bdev/blockdev.sh@293 -- # killprocess 181557 00:30:02.400 05:49:06 -- common/autotest_common.sh@926 -- # '[' -z 181557 ']' 00:30:02.400 05:49:06 -- common/autotest_common.sh@930 -- # kill -0 181557 00:30:02.400 05:49:06 -- common/autotest_common.sh@931 -- # uname 00:30:02.400 05:49:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:02.400 05:49:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 181557 00:30:02.400 05:49:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:02.400 05:49:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:02.400 killing process with pid 181557 00:30:02.400 05:49:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 181557' 00:30:02.400 05:49:06 -- common/autotest_common.sh@945 -- # kill 181557 00:30:02.400 05:49:06 -- common/autotest_common.sh@950 -- # wait 181557 00:30:03.337 05:49:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:30:03.337 00:30:03.337 real 0m2.169s 00:30:03.337 user 0m4.987s 00:30:03.337 sys 0m0.313s 00:30:03.337 05:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.337 05:49:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.337 ************************************ 00:30:03.337 END TEST bdev_bounds 00:30:03.337 ************************************ 00:30:03.337 05:49:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:03.337 05:49:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:30:03.337 05:49:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.337 05:49:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.337 ************************************ 00:30:03.337 START TEST bdev_nbd 00:30:03.337 ************************************ 00:30:03.337 
05:49:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:30:03.337 05:49:07 -- bdev/blockdev.sh@298 -- # uname -s 00:30:03.337 05:49:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:30:03.337 05:49:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.337 05:49:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:03.337 05:49:07 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:30:03.337 05:49:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:30:03.337 05:49:07 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:30:03.337 05:49:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:30:03.337 05:49:07 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:03.337 05:49:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:30:03.337 05:49:07 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:30:03.337 05:49:07 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:03.337 05:49:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:30:03.337 05:49:07 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:03.337 05:49:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:30:03.337 05:49:07 -- bdev/blockdev.sh@316 -- # nbd_pid=181622 00:30:03.337 05:49:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:03.337 05:49:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:03.337 05:49:07 -- bdev/blockdev.sh@318 -- # waitforlisten 181622 /var/tmp/spdk-nbd.sock 00:30:03.337 05:49:07 -- common/autotest_common.sh@819 -- # '[' -z 181622 ']' 00:30:03.337 05:49:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:03.337 05:49:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:03.337 05:49:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:03.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:03.337 05:49:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:03.337 05:49:07 -- common/autotest_common.sh@10 -- # set +x 00:30:03.337 [2024-10-07 05:49:07.302176] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:30:03.337 [2024-10-07 05:49:07.302401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.611 [2024-10-07 05:49:07.478660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.884 [2024-10-07 05:49:07.668412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.451 05:49:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:04.451 05:49:08 -- common/autotest_common.sh@852 -- # return 0 00:30:04.451 05:49:08 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@24 -- # local i 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:04.451 05:49:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:30:04.708 05:49:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:30:04.708 05:49:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:30:04.708 05:49:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:30:04.708 05:49:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:04.708 05:49:08 -- common/autotest_common.sh@857 -- # local i 00:30:04.708 05:49:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:04.708 05:49:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:04.708 05:49:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:04.708 05:49:08 -- common/autotest_common.sh@861 -- # break 00:30:04.708 05:49:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:04.708 05:49:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:04.708 05:49:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:04.708 1+0 records in 00:30:04.708 1+0 records out 00:30:04.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739977 s, 5.5 MB/s 00:30:04.708 05:49:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:04.708 05:49:08 -- common/autotest_common.sh@874 -- # size=4096 00:30:04.708 05:49:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:04.709 05:49:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:04.709 05:49:08 -- common/autotest_common.sh@877 -- # return 0 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:04.709 05:49:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:04.709 05:49:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:04.709 05:49:08 -- common/autotest_common.sh@857 -- # local i 00:30:04.709 05:49:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:04.709 05:49:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:04.709 05:49:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:04.709 05:49:08 -- common/autotest_common.sh@861 -- # break 00:30:04.709 05:49:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:04.709 05:49:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:04.709 05:49:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:04.709 1+0 records in 00:30:04.709 1+0 records out 00:30:04.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526385 s, 7.8 MB/s 00:30:04.709 05:49:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:04.709 05:49:08 -- common/autotest_common.sh@874 -- # size=4096 00:30:04.709 05:49:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:04.709 05:49:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:04.967 05:49:08 -- common/autotest_common.sh@877 -- # return 0 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:04.967 { 00:30:04.967 "nbd_device": "/dev/nbd0", 00:30:04.967 "bdev_name": "Nvme0n1p1" 00:30:04.967 }, 00:30:04.967 { 00:30:04.967 "nbd_device": "/dev/nbd1", 00:30:04.967 "bdev_name": "Nvme0n1p2" 00:30:04.967 } 00:30:04.967 ]' 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:04.967 { 00:30:04.967 "nbd_device": "/dev/nbd0", 00:30:04.967 "bdev_name": "Nvme0n1p1" 00:30:04.967 }, 00:30:04.967 { 00:30:04.967 "nbd_device": "/dev/nbd1", 00:30:04.967 "bdev_name": "Nvme0n1p2" 00:30:04.967 } 00:30:04.967 ]' 00:30:04.967 05:49:08 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@51 -- # local i 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:05.226 05:49:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:05.485 05:49:09 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@41 -- # break 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@45 -- # return 0 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@41 -- # break 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@45 -- # return 0 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:05.485 05:49:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:05.744 05:49:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:05.744 05:49:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:05.744 05:49:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@65 -- # true 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@65 -- # count=0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@122 -- # count=0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@127 -- # return 0 00:30:06.003 05:49:09 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@12 -- # local i 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:06.003 /dev/nbd0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:06.003 05:49:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:06.003 05:49:09 -- common/autotest_common.sh@857 -- # local i 00:30:06.003 05:49:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:06.003 05:49:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:06.003 05:49:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:06.003 05:49:09 -- common/autotest_common.sh@861 -- # break 00:30:06.003 05:49:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:06.003 05:49:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:06.003 05:49:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.003 1+0 records in 00:30:06.003 1+0 records out 00:30:06.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731856 s, 5.6 MB/s 00:30:06.003 05:49:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:06.003 05:49:09 -- common/autotest_common.sh@874 -- # size=4096 00:30:06.003 05:49:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:06.003 05:49:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:06.003 05:49:09 -- common/autotest_common.sh@877 -- # return 0 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:06.003 05:49:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:06.261 /dev/nbd1 00:30:06.261 05:49:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:06.261 05:49:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:06.261 05:49:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:06.262 05:49:10 -- common/autotest_common.sh@857 -- # local i 00:30:06.262 05:49:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:06.262 05:49:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:06.521 05:49:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:06.521 05:49:10 -- common/autotest_common.sh@861 -- # break 00:30:06.521 05:49:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:06.521 05:49:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:06.521 05:49:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:06.521 1+0 records in 00:30:06.521 1+0 records out 00:30:06.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719084 s, 5.7 MB/s 00:30:06.521 05:49:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:06.521 05:49:10 -- common/autotest_common.sh@874 -- # size=4096 00:30:06.521 05:49:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:06.521 05:49:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:06.521 05:49:10 -- common/autotest_common.sh@877 -- # return 0 00:30:06.521 05:49:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:06.521 05:49:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:06.521 05:49:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
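The trace above exports the two GPT partition bdevs as kernel NBD devices over the RPC socket, then polls /proc/partitions and issues a single direct 4 KiB read before treating each device as usable. A minimal sketch of that sequence driven by hand, assuming the socket path and bdev names shown in the trace (the output file path and retry interval are illustrative, not taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # export each GPT partition bdev as a kernel NBD block device
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Nvme0n1p2 /dev/nbd1

    # wait (up to 20 tries, mirroring the helper) for the kernel to list the device,
    # then prove it is readable with one direct 4 KiB read
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # pause between retries (interval assumed)
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct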
00:30:06.521 05:49:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:06.521 05:49:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:06.780 { 00:30:06.780 "nbd_device": "/dev/nbd0", 00:30:06.780 "bdev_name": "Nvme0n1p1" 00:30:06.780 }, 00:30:06.780 { 00:30:06.780 "nbd_device": "/dev/nbd1", 00:30:06.780 "bdev_name": "Nvme0n1p2" 00:30:06.780 } 00:30:06.780 ]' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:06.780 { 00:30:06.780 "nbd_device": "/dev/nbd0", 00:30:06.780 "bdev_name": "Nvme0n1p1" 00:30:06.780 }, 00:30:06.780 { 00:30:06.780 "nbd_device": "/dev/nbd1", 00:30:06.780 "bdev_name": "Nvme0n1p2" 00:30:06.780 } 00:30:06.780 ]' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:06.780 /dev/nbd1' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:06.780 /dev/nbd1' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@65 -- # count=2 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@95 -- # count=2 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:06.780 256+0 records in 00:30:06.780 256+0 records out 00:30:06.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806537 s, 130 MB/s 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:06.780 256+0 records in 00:30:06.780 256+0 records out 00:30:06.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.07453 s, 14.1 MB/s 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:06.780 256+0 records in 00:30:06.780 256+0 records out 00:30:06.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0832871 s, 12.6 MB/s 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
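The write pass above seeds a 1 MiB scratch file from /dev/urandom and copies it onto each NBD device with O_DIRECT; the lines that follow compare the first 1M of every device back against that file and then delete it. A condensed sketch of the same write-then-compare pattern, with the scratch file path taken from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # write phase: one random 1 MiB payload, copied onto every exported device
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-for-byte comparison of the first 1M of each device
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"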
00:30:06.780 05:49:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:06.780 05:49:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@51 -- # local i 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@41 -- # break 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@45 -- # return 0 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:07.040 05:49:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@41 -- # break 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@45 -- # return 0 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.299 05:49:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:07.558 05:49:11 -- bdev/nbd_common.sh@65 -- # true 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@65 -- # count=0 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@104 -- # count=0 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:07.817 05:49:11 -- 
bdev/nbd_common.sh@109 -- # return 0 00:30:07.817 05:49:11 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:07.817 05:49:11 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:08.076 malloc_lvol_verify 00:30:08.076 05:49:11 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:08.076 7a846e62-dae2-4025-92d6-8853f5f10189 00:30:08.076 05:49:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:08.335 b319ba58-080e-44c8-afa0-999976e5e23e 00:30:08.335 05:49:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:08.594 /dev/nbd0 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:08.594 mke2fs 1.46.5 (30-Dec-2021) 00:30:08.594 00:30:08.594 Filesystem too small for a journal 00:30:08.594 Discarding device blocks: 0/1024 done 00:30:08.594 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:08.594 00:30:08.594 Allocating group tables: 0/1 done 00:30:08.594 Writing inode tables: 0/1 done 00:30:08.594 Writing superblocks and filesystem accounting information: 0/1 done 00:30:08.594 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@51 -- # local i 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:08.594 05:49:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@41 -- # break 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@45 -- # return 0 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:08.854 05:49:12 -- bdev/nbd_common.sh@147 -- # return 0 00:30:08.854 05:49:12 -- bdev/blockdev.sh@324 -- # killprocess 181622 00:30:08.854 05:49:12 -- common/autotest_common.sh@926 -- # '[' -z 181622 ']' 00:30:08.854 05:49:12 -- common/autotest_common.sh@930 -- # kill -0 181622 00:30:08.854 05:49:12 -- common/autotest_common.sh@931 -- # uname 00:30:08.854 05:49:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:08.854 05:49:12 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 181622 00:30:08.854 05:49:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:08.854 05:49:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:08.854 killing process with pid 181622 00:30:08.854 05:49:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 181622' 00:30:08.854 05:49:12 -- common/autotest_common.sh@945 -- # kill 181622 00:30:08.854 05:49:12 -- common/autotest_common.sh@950 -- # wait 181622 00:30:10.234 05:49:13 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:10.234 00:30:10.234 real 0m6.553s 00:30:10.234 user 0m9.451s 00:30:10.234 sys 0m1.555s 00:30:10.234 05:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.234 05:49:13 -- common/autotest_common.sh@10 -- # set +x 00:30:10.234 ************************************ 00:30:10.234 END TEST bdev_nbd 00:30:10.234 ************************************ 00:30:10.234 05:49:13 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:10.234 05:49:13 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:10.234 05:49:13 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:10.234 skipping fio tests on NVMe due to multi-ns failures. 00:30:10.234 05:49:13 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:10.234 05:49:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:10.234 05:49:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:10.234 05:49:13 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:10.234 05:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:10.234 05:49:13 -- common/autotest_common.sh@10 -- # set +x 00:30:10.234 ************************************ 00:30:10.234 START TEST bdev_verify 00:30:10.234 ************************************ 00:30:10.234 05:49:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:10.234 [2024-10-07 05:49:13.903911] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:10.234 [2024-10-07 05:49:13.904118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181873 ] 00:30:10.234 [2024-10-07 05:49:14.076955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:10.494 [2024-10-07 05:49:14.276337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.494 [2024-10-07 05:49:14.276348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.753 Running I/O for 5 seconds... 
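bdev_verify drives both GPT partitions with the bdevperf example application; the latency table that follows is its output. The invocation as it appears in the trace, with the flag meanings annotated where they are unambiguous from bdevperf usage (-C and the trailing empty argument are passed through by the wrapper and left unannotated):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # --json: bdev configuration to load
    # -q 128: queue depth   -o 4096: I/O size in bytes
    # -w verify: write, read back and compare
    # -t 5: run time in seconds   -m 0x3: core mask (cores 0 and 1)
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''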
00:30:16.025 00:30:16.025 Latency(us) 00:30:16.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.025 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:16.025 Verification LBA range: start 0x0 length 0x4ff80 00:30:16.025 Nvme0n1p1 : 5.02 5443.99 21.27 0.00 0.00 23449.33 3604.48 21209.83 00:30:16.025 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:16.025 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:16.025 Nvme0n1p1 : 5.02 5435.13 21.23 0.00 0.00 23485.71 3187.43 27167.65 00:30:16.025 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:16.025 Verification LBA range: start 0x0 length 0x4ff7f 00:30:16.025 Nvme0n1p2 : 5.02 5440.15 21.25 0.00 0.00 23442.22 4468.36 21448.15 00:30:16.025 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:16.025 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:16.025 Nvme0n1p2 : 5.03 5437.99 21.24 0.00 0.00 23442.17 1236.25 23235.49 00:30:16.025 =================================================================================================================== 00:30:16.025 Total : 21757.26 84.99 0.00 0.00 23454.84 1236.25 27167.65 00:30:18.557 00:30:18.557 real 0m8.656s 00:30:18.557 user 0m16.084s 00:30:18.557 sys 0m0.323s 00:30:18.557 05:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.557 ************************************ 00:30:18.557 END TEST bdev_verify 00:30:18.557 05:49:22 -- common/autotest_common.sh@10 -- # set +x 00:30:18.557 ************************************ 00:30:18.816 05:49:22 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:18.816 05:49:22 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:18.816 05:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:18.816 05:49:22 -- common/autotest_common.sh@10 -- # set +x 00:30:18.816 ************************************ 00:30:18.816 START TEST bdev_verify_big_io 00:30:18.816 ************************************ 00:30:18.816 05:49:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:18.816 [2024-10-07 05:49:22.609453] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:18.816 [2024-10-07 05:49:22.609606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181993 ] 00:30:18.816 [2024-10-07 05:49:22.768488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.076 [2024-10-07 05:49:22.958689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.076 [2024-10-07 05:49:22.958705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.644 Running I/O for 5 seconds... 
00:30:24.920 00:30:24.920 Latency(us) 00:30:24.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.920 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:24.920 Verification LBA range: start 0x0 length 0x4ff8 00:30:24.920 Nvme0n1p1 : 5.07 1401.40 87.59 0.00 0.00 90536.32 2844.86 128688.87 00:30:24.920 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:24.920 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:24.920 Nvme0n1p1 : 5.08 1185.37 74.09 0.00 0.00 106919.07 2204.39 167772.16 00:30:24.920 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:24.920 Verification LBA range: start 0x0 length 0x4ff7 00:30:24.920 Nvme0n1p2 : 5.07 1409.58 88.10 0.00 0.00 89444.31 808.03 100567.97 00:30:24.920 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:24.920 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:24.920 Nvme0n1p2 : 5.08 1192.13 74.51 0.00 0.00 105213.07 919.74 124875.87 00:30:24.920 =================================================================================================================== 00:30:24.920 Total : 5188.47 324.28 0.00 0.00 97362.84 808.03 167772.16 00:30:26.299 00:30:26.299 real 0m7.520s 00:30:26.299 user 0m13.878s 00:30:26.299 sys 0m0.287s 00:30:26.299 05:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.299 ************************************ 00:30:26.299 END TEST bdev_verify_big_io 00:30:26.299 ************************************ 00:30:26.299 05:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.299 05:49:30 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.299 05:49:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:26.299 05:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.299 05:49:30 -- common/autotest_common.sh@10 -- # set +x 00:30:26.299 ************************************ 00:30:26.299 START TEST bdev_write_zeroes 00:30:26.299 ************************************ 00:30:26.299 05:49:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.299 [2024-10-07 05:49:30.200575] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:26.299 [2024-10-07 05:49:30.200999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182101 ] 00:30:26.558 [2024-10-07 05:49:30.372361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.818 [2024-10-07 05:49:30.569555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.076 Running I/O for 1 seconds... 
00:30:28.037 00:30:28.037 Latency(us) 00:30:28.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.037 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:28.037 Nvme0n1p1 : 1.01 26316.36 102.80 0.00 0.00 4852.93 2383.13 12809.31 00:30:28.037 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:28.037 Nvme0n1p2 : 1.01 26261.63 102.58 0.00 0.00 4854.88 2651.23 12690.15 00:30:28.037 =================================================================================================================== 00:30:28.037 Total : 52577.99 205.38 0.00 0.00 4853.90 2383.13 12809.31 00:30:29.414 00:30:29.414 real 0m3.063s 00:30:29.414 user 0m2.670s 00:30:29.414 sys 0m0.293s 00:30:29.414 05:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:29.414 05:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:29.414 ************************************ 00:30:29.414 END TEST bdev_write_zeroes 00:30:29.414 ************************************ 00:30:29.414 05:49:33 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:29.414 05:49:33 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:29.414 05:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.414 05:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:29.414 ************************************ 00:30:29.414 START TEST bdev_json_nonenclosed 00:30:29.414 ************************************ 00:30:29.414 05:49:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:29.414 [2024-10-07 05:49:33.326242] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:29.414 [2024-10-07 05:49:33.326642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182158 ] 00:30:29.673 [2024-10-07 05:49:33.496657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.932 [2024-10-07 05:49:33.685251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.932 [2024-10-07 05:49:33.685476] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:29.932 [2024-10-07 05:49:33.685516] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:30.191 00:30:30.191 real 0m0.779s 00:30:30.191 user 0m0.547s 00:30:30.191 sys 0m0.132s 00:30:30.191 05:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.191 05:49:34 -- common/autotest_common.sh@10 -- # set +x 00:30:30.191 ************************************ 00:30:30.191 END TEST bdev_json_nonenclosed 00:30:30.191 ************************************ 00:30:30.191 05:49:34 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:30.191 05:49:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:30.191 05:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.191 05:49:34 -- common/autotest_common.sh@10 -- # set +x 00:30:30.191 ************************************ 00:30:30.191 START TEST bdev_json_nonarray 00:30:30.191 ************************************ 00:30:30.191 05:49:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:30.191 [2024-10-07 05:49:34.145047] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:30.191 [2024-10-07 05:49:34.145204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182196 ] 00:30:30.451 [2024-10-07 05:49:34.301234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.710 [2024-10-07 05:49:34.491362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.710 [2024-10-07 05:49:34.491598] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
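Both negative tests above hand bdevperf a deliberately malformed --json file and expect spdk_app_start to fail: one file is not enclosed in {}, the other has a "subsystems" member that is not an array. For contrast, a minimal sketch of the top-level shape the loader accepts (the file path and the empty bdev config list are illustrative):

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }
    EOF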
00:30:30.710 [2024-10-07 05:49:34.491643] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:30.970 00:30:30.970 real 0m0.750s 00:30:30.970 user 0m0.498s 00:30:30.970 sys 0m0.152s 00:30:30.970 05:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.970 05:49:34 -- common/autotest_common.sh@10 -- # set +x 00:30:30.970 ************************************ 00:30:30.970 END TEST bdev_json_nonarray 00:30:30.970 ************************************ 00:30:30.970 05:49:34 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:30.970 05:49:34 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:30.970 05:49:34 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:30.970 05:49:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:30.970 05:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:30.970 05:49:34 -- common/autotest_common.sh@10 -- # set +x 00:30:30.970 ************************************ 00:30:30.970 START TEST bdev_gpt_uuid 00:30:30.970 ************************************ 00:30:30.970 05:49:34 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:30:30.970 05:49:34 -- bdev/blockdev.sh@612 -- # local bdev 00:30:30.970 05:49:34 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:30.970 05:49:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=182234 00:30:30.970 05:49:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:30.970 05:49:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:30.970 05:49:34 -- bdev/blockdev.sh@47 -- # waitforlisten 182234 00:30:30.970 05:49:34 -- common/autotest_common.sh@819 -- # '[' -z 182234 ']' 00:30:30.970 05:49:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.970 05:49:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:30.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.970 05:49:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.970 05:49:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:30.970 05:49:34 -- common/autotest_common.sh@10 -- # set +x 00:30:31.229 [2024-10-07 05:49:34.972946] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:31.229 [2024-10-07 05:49:34.973169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182234 ] 00:30:31.229 [2024-10-07 05:49:35.144256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.487 [2024-10-07 05:49:35.334106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:31.487 [2024-10-07 05:49:35.334351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.864 05:49:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:32.864 05:49:36 -- common/autotest_common.sh@852 -- # return 0 00:30:32.864 05:49:36 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:32.864 05:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.864 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:32.864 Some configs were skipped because the RPC state that can call them passed over. 
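With the target up and bdev.json loaded (the skipped-configs notice above is expected at this point), the test waits for bdev examination to finish and then looks each GPT partition up by its unique partition GUID; the JSON dumps below are the results. The same queries against the default RPC socket, using the GUIDs from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # block until bdev examine (including GPT parsing) has completed
    "$rpc" bdev_wait_for_examine

    # fetch each partition bdev by its unique partition GUID
    "$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030   # Nvme0n1p1, SPDK_TEST_first
    "$rpc" bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df   # Nvme0n1p2, SPDK_TEST_second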
00:30:32.864 05:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.864 05:49:36 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:32.864 05:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.864 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:32.864 05:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.864 05:49:36 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:32.864 05:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.864 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:32.864 05:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.864 05:49:36 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:32.864 { 00:30:32.864 "name": "Nvme0n1p1", 00:30:32.864 "aliases": [ 00:30:32.864 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:32.864 ], 00:30:32.864 "product_name": "GPT Disk", 00:30:32.864 "block_size": 4096, 00:30:32.864 "num_blocks": 655104, 00:30:32.864 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:32.864 "assigned_rate_limits": { 00:30:32.864 "rw_ios_per_sec": 0, 00:30:32.864 "rw_mbytes_per_sec": 0, 00:30:32.864 "r_mbytes_per_sec": 0, 00:30:32.864 "w_mbytes_per_sec": 0 00:30:32.864 }, 00:30:32.864 "claimed": false, 00:30:32.864 "zoned": false, 00:30:32.864 "supported_io_types": { 00:30:32.864 "read": true, 00:30:32.864 "write": true, 00:30:32.864 "unmap": true, 00:30:32.864 "write_zeroes": true, 00:30:32.864 "flush": true, 00:30:32.864 "reset": true, 00:30:32.864 "compare": true, 00:30:32.864 "compare_and_write": false, 00:30:32.864 "abort": true, 00:30:32.864 "nvme_admin": false, 00:30:32.864 "nvme_io": false 00:30:32.864 }, 00:30:32.864 "driver_specific": { 00:30:32.864 "gpt": { 00:30:32.864 "base_bdev": "Nvme0n1", 00:30:32.864 "offset_blocks": 256, 00:30:32.864 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:32.864 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:32.864 "partition_name": "SPDK_TEST_first" 00:30:32.864 } 00:30:32.864 } 00:30:32.864 } 00:30:32.864 ]' 00:30:32.864 05:49:36 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:32.864 05:49:36 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:32.864 05:49:36 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:33.124 05:49:36 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:33.124 05:49:36 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:33.124 05:49:36 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:33.124 05:49:36 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:33.124 05:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:33.124 05:49:36 -- common/autotest_common.sh@10 -- # set +x 00:30:33.124 05:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:33.124 05:49:36 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:33.124 { 00:30:33.124 "name": "Nvme0n1p2", 00:30:33.124 "aliases": [ 00:30:33.124 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:33.124 ], 00:30:33.124 "product_name": "GPT Disk", 00:30:33.124 "block_size": 4096, 00:30:33.124 "num_blocks": 655103, 00:30:33.124 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:33.124 "assigned_rate_limits": { 00:30:33.124 "rw_ios_per_sec": 0, 00:30:33.124 
"rw_mbytes_per_sec": 0, 00:30:33.124 "r_mbytes_per_sec": 0, 00:30:33.124 "w_mbytes_per_sec": 0 00:30:33.124 }, 00:30:33.124 "claimed": false, 00:30:33.124 "zoned": false, 00:30:33.124 "supported_io_types": { 00:30:33.124 "read": true, 00:30:33.124 "write": true, 00:30:33.124 "unmap": true, 00:30:33.124 "write_zeroes": true, 00:30:33.124 "flush": true, 00:30:33.124 "reset": true, 00:30:33.124 "compare": true, 00:30:33.124 "compare_and_write": false, 00:30:33.124 "abort": true, 00:30:33.124 "nvme_admin": false, 00:30:33.124 "nvme_io": false 00:30:33.124 }, 00:30:33.124 "driver_specific": { 00:30:33.124 "gpt": { 00:30:33.124 "base_bdev": "Nvme0n1", 00:30:33.124 "offset_blocks": 655360, 00:30:33.124 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:33.124 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:33.124 "partition_name": "SPDK_TEST_second" 00:30:33.124 } 00:30:33.124 } 00:30:33.124 } 00:30:33.124 ]' 00:30:33.124 05:49:36 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:33.124 05:49:36 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:33.124 05:49:36 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:33.124 05:49:37 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:33.124 05:49:37 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:33.124 05:49:37 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:33.124 05:49:37 -- bdev/blockdev.sh@629 -- # killprocess 182234 00:30:33.124 05:49:37 -- common/autotest_common.sh@926 -- # '[' -z 182234 ']' 00:30:33.124 05:49:37 -- common/autotest_common.sh@930 -- # kill -0 182234 00:30:33.124 05:49:37 -- common/autotest_common.sh@931 -- # uname 00:30:33.124 05:49:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:33.124 05:49:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 182234 00:30:33.124 05:49:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:33.124 05:49:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:33.124 killing process with pid 182234 00:30:33.124 05:49:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 182234' 00:30:33.124 05:49:37 -- common/autotest_common.sh@945 -- # kill 182234 00:30:33.124 05:49:37 -- common/autotest_common.sh@950 -- # wait 182234 00:30:35.661 00:30:35.661 real 0m4.145s 00:30:35.661 user 0m4.477s 00:30:35.661 sys 0m0.610s 00:30:35.661 05:49:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.661 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:30:35.661 ************************************ 00:30:35.661 END TEST bdev_gpt_uuid 00:30:35.661 ************************************ 00:30:35.661 05:49:39 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:35.661 05:49:39 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:35.661 05:49:39 -- bdev/blockdev.sh@809 -- # cleanup 00:30:35.661 05:49:39 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:35.661 05:49:39 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:35.661 05:49:39 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:35.661 05:49:39 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:35.661 05:49:39 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:35.661 05:49:39 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:35.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:35.661 Waiting for block devices as requested 00:30:35.661 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:35.661 05:49:39 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:35.661 05:49:39 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:35.661 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:35.661 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:35.661 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:35.661 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:35.661 05:49:39 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:35.661 00:30:35.661 real 0m44.623s 00:30:35.661 user 1m3.225s 00:30:35.661 sys 0m6.449s 00:30:35.661 05:49:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.661 ************************************ 00:30:35.661 END TEST blockdev_nvme_gpt 00:30:35.661 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:30:35.661 ************************************ 00:30:35.661 05:49:39 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:35.661 05:49:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:35.661 05:49:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:35.661 05:49:39 -- common/autotest_common.sh@10 -- # set +x 00:30:35.920 ************************************ 00:30:35.920 START TEST nvme 00:30:35.920 ************************************ 00:30:35.920 05:49:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:35.920 * Looking for test storage... 00:30:35.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:35.920 05:49:39 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:36.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:36.438 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:37.817 05:49:41 -- nvme/nvme.sh@79 -- # uname 00:30:37.817 05:49:41 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:37.817 05:49:41 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:37.817 05:49:41 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:37.817 05:49:41 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:37.817 05:49:41 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:30:37.817 05:49:41 -- common/autotest_common.sh@1045 -- # echo 0 00:30:37.817 05:49:41 -- common/autotest_common.sh@1047 -- # stubpid=182651 00:30:37.817 Waiting for stub to ready for secondary processes... 00:30:37.817 05:49:41 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:37.817 05:49:41 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:30:37.817 05:49:41 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:37.817 05:49:41 -- common/autotest_common.sh@1051 -- # [[ -e /proc/182651 ]] 00:30:37.817 05:49:41 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:37.817 [2024-10-07 05:49:41.770145] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
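nvme.sh first launches the stub application so the NVMe controller stays claimed by a primary DPDK process while the individual tests attach as secondaries; the harness then polls for the ready file while checking that the stub is still alive. A sketch of that startup, with flag meanings given as commonly used for SPDK applications (treat them as assumptions rather than a reading of this log):

    stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub

    # -s 4096: hugepage memory in MB   -i 0: shared memory (instance) id
    # -m 0xE: core mask, i.e. cores 1-3
    "$stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!

    # wait until the stub publishes its ready file; give up if it exits first
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e /proc/$stubpid ] || { echo 'stub died' >&2; exit 1; }
        sleep 1
    done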
00:30:37.817 [2024-10-07 05:49:41.770304] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.754 05:49:42 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:38.754 05:49:42 -- common/autotest_common.sh@1051 -- # [[ -e /proc/182651 ]] 00:30:38.754 05:49:42 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:39.323 [2024-10-07 05:49:43.194294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:39.581 [2024-10-07 05:49:43.393891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.581 [2024-10-07 05:49:43.393994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.581 [2024-10-07 05:49:43.394000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.581 [2024-10-07 05:49:43.408749] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:39.581 [2024-10-07 05:49:43.418265] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:39.581 [2024-10-07 05:49:43.419091] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:39.840 05:49:43 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:39.840 done. 00:30:39.840 05:49:43 -- common/autotest_common.sh@1054 -- # echo done. 00:30:39.840 05:49:43 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:39.840 05:49:43 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:30:39.840 05:49:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:39.840 05:49:43 -- common/autotest_common.sh@10 -- # set +x 00:30:39.840 ************************************ 00:30:39.840 START TEST nvme_reset 00:30:39.840 ************************************ 00:30:39.840 05:49:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:40.099 Initializing NVMe Controllers 00:30:40.099 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:40.099 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:40.099 00:30:40.099 real 0m0.319s 00:30:40.099 user 0m0.108s 00:30:40.099 sys 0m0.136s 00:30:40.099 05:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.099 05:49:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.099 ************************************ 00:30:40.099 END TEST nvme_reset 00:30:40.099 ************************************ 00:30:40.357 05:49:44 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:40.357 05:49:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.357 05:49:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.357 05:49:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.357 ************************************ 00:30:40.357 START TEST nvme_identify 00:30:40.357 ************************************ 00:30:40.357 05:49:44 -- common/autotest_common.sh@1104 -- # nvme_identify 00:30:40.357 05:49:44 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:40.357 05:49:44 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:40.358 05:49:44 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:40.358 05:49:44 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:40.358 05:49:44 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:30:40.358 05:49:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:40.358 05:49:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:40.358 05:49:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:40.358 05:49:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:40.358 05:49:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:40.358 05:49:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:40.358 05:49:44 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:40.617 [2024-10-07 05:49:44.391952] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 182690 terminated unexpected 00:30:40.617 ===================================================== 00:30:40.617 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:40.617 ===================================================== 00:30:40.617 Controller Capabilities/Features 00:30:40.617 ================================ 00:30:40.617 Vendor ID: 1b36 00:30:40.617 Subsystem Vendor ID: 1af4 00:30:40.617 Serial Number: 12340 00:30:40.617 Model Number: QEMU NVMe Ctrl 00:30:40.617 Firmware Version: 8.0.0 00:30:40.617 Recommended Arb Burst: 6 00:30:40.617 IEEE OUI Identifier: 00 54 52 00:30:40.617 Multi-path I/O 00:30:40.617 May have multiple subsystem ports: No 00:30:40.617 May have multiple controllers: No 00:30:40.617 Associated with SR-IOV VF: No 00:30:40.617 Max Data Transfer Size: 524288 00:30:40.617 Max Number of Namespaces: 256 00:30:40.617 Max Number of I/O Queues: 64 00:30:40.617 NVMe Specification Version (VS): 1.4 00:30:40.617 NVMe Specification Version (Identify): 1.4 00:30:40.617 Maximum Queue Entries: 2048 00:30:40.617 Contiguous Queues Required: Yes 00:30:40.617 Arbitration Mechanisms Supported 00:30:40.617 Weighted Round Robin: Not Supported 00:30:40.617 Vendor Specific: Not Supported 00:30:40.617 Reset Timeout: 7500 ms 00:30:40.617 Doorbell Stride: 4 bytes 00:30:40.617 NVM Subsystem Reset: Not Supported 00:30:40.617 Command Sets Supported 00:30:40.617 NVM Command Set: Supported 00:30:40.618 Boot Partition: Not Supported 00:30:40.618 Memory Page Size Minimum: 4096 bytes 00:30:40.618 Memory Page Size Maximum: 65536 bytes 00:30:40.618 Persistent Memory Region: Not Supported 00:30:40.618 Optional Asynchronous Events Supported 00:30:40.618 Namespace Attribute Notices: Supported 00:30:40.618 Firmware Activation Notices: Not Supported 00:30:40.618 ANA Change Notices: Not Supported 00:30:40.618 PLE Aggregate Log Change Notices: Not Supported 00:30:40.618 LBA Status Info Alert Notices: Not Supported 00:30:40.618 EGE Aggregate Log Change Notices: Not Supported 00:30:40.618 Normal NVM Subsystem Shutdown event: Not Supported 00:30:40.618 Zone Descriptor Change Notices: Not Supported 00:30:40.618 Discovery Log Change Notices: Not Supported 00:30:40.618 Controller Attributes 00:30:40.618 128-bit Host Identifier: Not Supported 00:30:40.618 Non-Operational Permissive Mode: Not Supported 00:30:40.618 NVM Sets: Not Supported 00:30:40.618 Read Recovery Levels: Not Supported 00:30:40.618 Endurance Groups: Not Supported 00:30:40.618 Predictable Latency Mode: Not Supported 00:30:40.618 Traffic Based Keep ALive: Not Supported 00:30:40.618 Namespace Granularity: Not Supported 00:30:40.618 SQ Associations: Not Supported 00:30:40.618 UUID List: Not Supported 00:30:40.618 Multi-Domain Subsystem: Not Supported 00:30:40.618 
Fixed Capacity Management: Not Supported 00:30:40.618 Variable Capacity Management: Not Supported 00:30:40.618 Delete Endurance Group: Not Supported 00:30:40.618 Delete NVM Set: Not Supported 00:30:40.618 Extended LBA Formats Supported: Supported 00:30:40.618 Flexible Data Placement Supported: Not Supported 00:30:40.618 00:30:40.618 Controller Memory Buffer Support 00:30:40.618 ================================ 00:30:40.618 Supported: No 00:30:40.618 00:30:40.618 Persistent Memory Region Support 00:30:40.618 ================================ 00:30:40.618 Supported: No 00:30:40.618 00:30:40.618 Admin Command Set Attributes 00:30:40.618 ============================ 00:30:40.618 Security Send/Receive: Not Supported 00:30:40.618 Format NVM: Supported 00:30:40.618 Firmware Activate/Download: Not Supported 00:30:40.618 Namespace Management: Supported 00:30:40.618 Device Self-Test: Not Supported 00:30:40.618 Directives: Supported 00:30:40.618 NVMe-MI: Not Supported 00:30:40.618 Virtualization Management: Not Supported 00:30:40.618 Doorbell Buffer Config: Supported 00:30:40.618 Get LBA Status Capability: Not Supported 00:30:40.618 Command & Feature Lockdown Capability: Not Supported 00:30:40.618 Abort Command Limit: 4 00:30:40.618 Async Event Request Limit: 4 00:30:40.618 Number of Firmware Slots: N/A 00:30:40.618 Firmware Slot 1 Read-Only: N/A 00:30:40.618 Firmware Activation Without Reset: N/A 00:30:40.618 Multiple Update Detection Support: N/A 00:30:40.618 Firmware Update Granularity: No Information Provided 00:30:40.618 Per-Namespace SMART Log: Yes 00:30:40.618 Asymmetric Namespace Access Log Page: Not Supported 00:30:40.618 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:40.618 Command Effects Log Page: Supported 00:30:40.618 Get Log Page Extended Data: Supported 00:30:40.618 Telemetry Log Pages: Not Supported 00:30:40.618 Persistent Event Log Pages: Not Supported 00:30:40.618 Supported Log Pages Log Page: May Support 00:30:40.618 Commands Supported & Effects Log Page: Not Supported 00:30:40.618 Feature Identifiers & Effects Log Page:May Support 00:30:40.618 NVMe-MI Commands & Effects Log Page: May Support 00:30:40.618 Data Area 4 for Telemetry Log: Not Supported 00:30:40.618 Error Log Page Entries Supported: 1 00:30:40.618 Keep Alive: Not Supported 00:30:40.618 00:30:40.618 NVM Command Set Attributes 00:30:40.618 ========================== 00:30:40.618 Submission Queue Entry Size 00:30:40.618 Max: 64 00:30:40.618 Min: 64 00:30:40.618 Completion Queue Entry Size 00:30:40.618 Max: 16 00:30:40.618 Min: 16 00:30:40.618 Number of Namespaces: 256 00:30:40.618 Compare Command: Supported 00:30:40.618 Write Uncorrectable Command: Not Supported 00:30:40.618 Dataset Management Command: Supported 00:30:40.618 Write Zeroes Command: Supported 00:30:40.618 Set Features Save Field: Supported 00:30:40.618 Reservations: Not Supported 00:30:40.618 Timestamp: Supported 00:30:40.618 Copy: Supported 00:30:40.618 Volatile Write Cache: Present 00:30:40.618 Atomic Write Unit (Normal): 1 00:30:40.618 Atomic Write Unit (PFail): 1 00:30:40.618 Atomic Compare & Write Unit: 1 00:30:40.618 Fused Compare & Write: Not Supported 00:30:40.618 Scatter-Gather List 00:30:40.618 SGL Command Set: Supported 00:30:40.618 SGL Keyed: Not Supported 00:30:40.618 SGL Bit Bucket Descriptor: Not Supported 00:30:40.618 SGL Metadata Pointer: Not Supported 00:30:40.618 Oversized SGL: Not Supported 00:30:40.618 SGL Metadata Address: Not Supported 00:30:40.618 SGL Offset: Not Supported 00:30:40.618 Transport SGL Data Block: Not Supported 
00:30:40.618 Replay Protected Memory Block: Not Supported 00:30:40.618 00:30:40.618 Firmware Slot Information 00:30:40.618 ========================= 00:30:40.618 Active slot: 1 00:30:40.618 Slot 1 Firmware Revision: 1.0 00:30:40.618 00:30:40.618 00:30:40.618 Commands Supported and Effects 00:30:40.618 ============================== 00:30:40.618 Admin Commands 00:30:40.618 -------------- 00:30:40.618 Delete I/O Submission Queue (00h): Supported 00:30:40.618 Create I/O Submission Queue (01h): Supported 00:30:40.618 Get Log Page (02h): Supported 00:30:40.618 Delete I/O Completion Queue (04h): Supported 00:30:40.618 Create I/O Completion Queue (05h): Supported 00:30:40.618 Identify (06h): Supported 00:30:40.618 Abort (08h): Supported 00:30:40.618 Set Features (09h): Supported 00:30:40.618 Get Features (0Ah): Supported 00:30:40.618 Asynchronous Event Request (0Ch): Supported 00:30:40.618 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:40.618 Directive Send (19h): Supported 00:30:40.618 Directive Receive (1Ah): Supported 00:30:40.618 Virtualization Management (1Ch): Supported 00:30:40.618 Doorbell Buffer Config (7Ch): Supported 00:30:40.618 Format NVM (80h): Supported LBA-Change 00:30:40.618 I/O Commands 00:30:40.618 ------------ 00:30:40.618 Flush (00h): Supported LBA-Change 00:30:40.618 Write (01h): Supported LBA-Change 00:30:40.618 Read (02h): Supported 00:30:40.618 Compare (05h): Supported 00:30:40.618 Write Zeroes (08h): Supported LBA-Change 00:30:40.618 Dataset Management (09h): Supported LBA-Change 00:30:40.618 Unknown (0Ch): Supported 00:30:40.618 Unknown (12h): Supported 00:30:40.618 Copy (19h): Supported LBA-Change 00:30:40.618 Unknown (1Dh): Supported LBA-Change 00:30:40.618 00:30:40.618 Error Log 00:30:40.618 ========= 00:30:40.618 00:30:40.618 Arbitration 00:30:40.618 =========== 00:30:40.618 Arbitration Burst: no limit 00:30:40.618 00:30:40.618 Power Management 00:30:40.618 ================ 00:30:40.618 Number of Power States: 1 00:30:40.618 Current Power State: Power State #0 00:30:40.618 Power State #0: 00:30:40.618 Max Power: 25.00 W 00:30:40.618 Non-Operational State: Operational 00:30:40.618 Entry Latency: 16 microseconds 00:30:40.618 Exit Latency: 4 microseconds 00:30:40.618 Relative Read Throughput: 0 00:30:40.618 Relative Read Latency: 0 00:30:40.618 Relative Write Throughput: 0 00:30:40.618 Relative Write Latency: 0 00:30:40.618 Idle Power: Not Reported 00:30:40.618 Active Power: Not Reported 00:30:40.618 Non-Operational Permissive Mode: Not Supported 00:30:40.618 00:30:40.618 Health Information 00:30:40.618 ================== 00:30:40.618 Critical Warnings: 00:30:40.618 Available Spare Space: OK 00:30:40.618 Temperature: OK 00:30:40.618 Device Reliability: OK 00:30:40.618 Read Only: No 00:30:40.618 Volatile Memory Backup: OK 00:30:40.618 Current Temperature: 323 Kelvin (50 Celsius) 00:30:40.618 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:40.618 Available Spare: 0% 00:30:40.618 Available Spare Threshold: 0% 00:30:40.618 Life Percentage Used: 0% 00:30:40.618 Data Units Read: 9220 00:30:40.618 Data Units Written: 4519 00:30:40.618 Host Read Commands: 321468 00:30:40.618 Host Write Commands: 176086 00:30:40.618 Controller Busy Time: 0 minutes 00:30:40.618 Power Cycles: 0 00:30:40.618 Power On Hours: 0 hours 00:30:40.618 Unsafe Shutdowns: 0 00:30:40.618 Unrecoverable Media Errors: 0 00:30:40.618 Lifetime Error Log Entries: 0 00:30:40.618 Warning Temperature Time: 0 minutes 00:30:40.618 Critical Temperature Time: 0 minutes 00:30:40.618 00:30:40.618 
Number of Queues 00:30:40.618 ================ 00:30:40.618 Number of I/O Submission Queues: 64 00:30:40.618 Number of I/O Completion Queues: 64 00:30:40.618 00:30:40.619 ZNS Specific Controller Data 00:30:40.619 ============================ 00:30:40.619 Zone Append Size Limit: 0 00:30:40.619 00:30:40.619 00:30:40.619 Active Namespaces 00:30:40.619 ================= 00:30:40.619 Namespace ID:1 00:30:40.619 Error Recovery Timeout: Unlimited 00:30:40.619 Command Set Identifier: NVM (00h) 00:30:40.619 Deallocate: Supported 00:30:40.619 Deallocated/Unwritten Error: Supported 00:30:40.619 Deallocated Read Value: All 0x00 00:30:40.619 Deallocate in Write Zeroes: Not Supported 00:30:40.619 Deallocated Guard Field: 0xFFFF 00:30:40.619 Flush: Supported 00:30:40.619 Reservation: Not Supported 00:30:40.619 Namespace Sharing Capabilities: Private 00:30:40.619 Size (in LBAs): 1310720 (5GiB) 00:30:40.619 Capacity (in LBAs): 1310720 (5GiB) 00:30:40.619 Utilization (in LBAs): 1310720 (5GiB) 00:30:40.619 Thin Provisioning: Not Supported 00:30:40.619 Per-NS Atomic Units: No 00:30:40.619 Maximum Single Source Range Length: 128 00:30:40.619 Maximum Copy Length: 128 00:30:40.619 Maximum Source Range Count: 128 00:30:40.619 NGUID/EUI64 Never Reused: No 00:30:40.619 Namespace Write Protected: No 00:30:40.619 Number of LBA Formats: 8 00:30:40.619 Current LBA Format: LBA Format #04 00:30:40.619 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:40.619 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:40.619 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:40.619 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:40.619 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:40.619 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:40.619 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:40.619 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:40.619 00:30:40.619 05:49:44 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:40.619 05:49:44 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:40.878 ===================================================== 00:30:40.878 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:40.878 ===================================================== 00:30:40.878 Controller Capabilities/Features 00:30:40.878 ================================ 00:30:40.878 Vendor ID: 1b36 00:30:40.878 Subsystem Vendor ID: 1af4 00:30:40.878 Serial Number: 12340 00:30:40.878 Model Number: QEMU NVMe Ctrl 00:30:40.878 Firmware Version: 8.0.0 00:30:40.878 Recommended Arb Burst: 6 00:30:40.878 IEEE OUI Identifier: 00 54 52 00:30:40.878 Multi-path I/O 00:30:40.878 May have multiple subsystem ports: No 00:30:40.878 May have multiple controllers: No 00:30:40.878 Associated with SR-IOV VF: No 00:30:40.878 Max Data Transfer Size: 524288 00:30:40.878 Max Number of Namespaces: 256 00:30:40.878 Max Number of I/O Queues: 64 00:30:40.878 NVMe Specification Version (VS): 1.4 00:30:40.878 NVMe Specification Version (Identify): 1.4 00:30:40.878 Maximum Queue Entries: 2048 00:30:40.878 Contiguous Queues Required: Yes 00:30:40.878 Arbitration Mechanisms Supported 00:30:40.878 Weighted Round Robin: Not Supported 00:30:40.878 Vendor Specific: Not Supported 00:30:40.878 Reset Timeout: 7500 ms 00:30:40.878 Doorbell Stride: 4 bytes 00:30:40.878 NVM Subsystem Reset: Not Supported 00:30:40.878 Command Sets Supported 00:30:40.878 NVM Command Set: Supported 00:30:40.878 Boot Partition: Not Supported 00:30:40.878 Memory Page Size 
Minimum: 4096 bytes 00:30:40.878 Memory Page Size Maximum: 65536 bytes 00:30:40.878 Persistent Memory Region: Not Supported 00:30:40.878 Optional Asynchronous Events Supported 00:30:40.878 Namespace Attribute Notices: Supported 00:30:40.878 Firmware Activation Notices: Not Supported 00:30:40.878 ANA Change Notices: Not Supported 00:30:40.878 PLE Aggregate Log Change Notices: Not Supported 00:30:40.878 LBA Status Info Alert Notices: Not Supported 00:30:40.878 EGE Aggregate Log Change Notices: Not Supported 00:30:40.878 Normal NVM Subsystem Shutdown event: Not Supported 00:30:40.878 Zone Descriptor Change Notices: Not Supported 00:30:40.878 Discovery Log Change Notices: Not Supported 00:30:40.878 Controller Attributes 00:30:40.878 128-bit Host Identifier: Not Supported 00:30:40.878 Non-Operational Permissive Mode: Not Supported 00:30:40.878 NVM Sets: Not Supported 00:30:40.878 Read Recovery Levels: Not Supported 00:30:40.878 Endurance Groups: Not Supported 00:30:40.878 Predictable Latency Mode: Not Supported 00:30:40.878 Traffic Based Keep ALive: Not Supported 00:30:40.878 Namespace Granularity: Not Supported 00:30:40.878 SQ Associations: Not Supported 00:30:40.878 UUID List: Not Supported 00:30:40.878 Multi-Domain Subsystem: Not Supported 00:30:40.878 Fixed Capacity Management: Not Supported 00:30:40.878 Variable Capacity Management: Not Supported 00:30:40.878 Delete Endurance Group: Not Supported 00:30:40.878 Delete NVM Set: Not Supported 00:30:40.878 Extended LBA Formats Supported: Supported 00:30:40.878 Flexible Data Placement Supported: Not Supported 00:30:40.878 00:30:40.878 Controller Memory Buffer Support 00:30:40.878 ================================ 00:30:40.878 Supported: No 00:30:40.878 00:30:40.878 Persistent Memory Region Support 00:30:40.878 ================================ 00:30:40.878 Supported: No 00:30:40.878 00:30:40.878 Admin Command Set Attributes 00:30:40.878 ============================ 00:30:40.878 Security Send/Receive: Not Supported 00:30:40.878 Format NVM: Supported 00:30:40.878 Firmware Activate/Download: Not Supported 00:30:40.878 Namespace Management: Supported 00:30:40.878 Device Self-Test: Not Supported 00:30:40.878 Directives: Supported 00:30:40.878 NVMe-MI: Not Supported 00:30:40.878 Virtualization Management: Not Supported 00:30:40.879 Doorbell Buffer Config: Supported 00:30:40.879 Get LBA Status Capability: Not Supported 00:30:40.879 Command & Feature Lockdown Capability: Not Supported 00:30:40.879 Abort Command Limit: 4 00:30:40.879 Async Event Request Limit: 4 00:30:40.879 Number of Firmware Slots: N/A 00:30:40.879 Firmware Slot 1 Read-Only: N/A 00:30:40.879 Firmware Activation Without Reset: N/A 00:30:40.879 Multiple Update Detection Support: N/A 00:30:40.879 Firmware Update Granularity: No Information Provided 00:30:40.879 Per-Namespace SMART Log: Yes 00:30:40.879 Asymmetric Namespace Access Log Page: Not Supported 00:30:40.879 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:40.879 Command Effects Log Page: Supported 00:30:40.879 Get Log Page Extended Data: Supported 00:30:40.879 Telemetry Log Pages: Not Supported 00:30:40.879 Persistent Event Log Pages: Not Supported 00:30:40.879 Supported Log Pages Log Page: May Support 00:30:40.879 Commands Supported & Effects Log Page: Not Supported 00:30:40.879 Feature Identifiers & Effects Log Page:May Support 00:30:40.879 NVMe-MI Commands & Effects Log Page: May Support 00:30:40.879 Data Area 4 for Telemetry Log: Not Supported 00:30:40.879 Error Log Page Entries Supported: 1 00:30:40.879 Keep Alive: Not 
Supported 00:30:40.879 00:30:40.879 NVM Command Set Attributes 00:30:40.879 ========================== 00:30:40.879 Submission Queue Entry Size 00:30:40.879 Max: 64 00:30:40.879 Min: 64 00:30:40.879 Completion Queue Entry Size 00:30:40.879 Max: 16 00:30:40.879 Min: 16 00:30:40.879 Number of Namespaces: 256 00:30:40.879 Compare Command: Supported 00:30:40.879 Write Uncorrectable Command: Not Supported 00:30:40.879 Dataset Management Command: Supported 00:30:40.879 Write Zeroes Command: Supported 00:30:40.879 Set Features Save Field: Supported 00:30:40.879 Reservations: Not Supported 00:30:40.879 Timestamp: Supported 00:30:40.879 Copy: Supported 00:30:40.879 Volatile Write Cache: Present 00:30:40.879 Atomic Write Unit (Normal): 1 00:30:40.879 Atomic Write Unit (PFail): 1 00:30:40.879 Atomic Compare & Write Unit: 1 00:30:40.879 Fused Compare & Write: Not Supported 00:30:40.879 Scatter-Gather List 00:30:40.879 SGL Command Set: Supported 00:30:40.879 SGL Keyed: Not Supported 00:30:40.879 SGL Bit Bucket Descriptor: Not Supported 00:30:40.879 SGL Metadata Pointer: Not Supported 00:30:40.879 Oversized SGL: Not Supported 00:30:40.879 SGL Metadata Address: Not Supported 00:30:40.879 SGL Offset: Not Supported 00:30:40.879 Transport SGL Data Block: Not Supported 00:30:40.879 Replay Protected Memory Block: Not Supported 00:30:40.879 00:30:40.879 Firmware Slot Information 00:30:40.879 ========================= 00:30:40.879 Active slot: 1 00:30:40.879 Slot 1 Firmware Revision: 1.0 00:30:40.879 00:30:40.879 00:30:40.879 Commands Supported and Effects 00:30:40.879 ============================== 00:30:40.879 Admin Commands 00:30:40.879 -------------- 00:30:40.879 Delete I/O Submission Queue (00h): Supported 00:30:40.879 Create I/O Submission Queue (01h): Supported 00:30:40.879 Get Log Page (02h): Supported 00:30:40.879 Delete I/O Completion Queue (04h): Supported 00:30:40.879 Create I/O Completion Queue (05h): Supported 00:30:40.879 Identify (06h): Supported 00:30:40.879 Abort (08h): Supported 00:30:40.879 Set Features (09h): Supported 00:30:40.879 Get Features (0Ah): Supported 00:30:40.879 Asynchronous Event Request (0Ch): Supported 00:30:40.879 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:40.879 Directive Send (19h): Supported 00:30:40.879 Directive Receive (1Ah): Supported 00:30:40.879 Virtualization Management (1Ch): Supported 00:30:40.879 Doorbell Buffer Config (7Ch): Supported 00:30:40.879 Format NVM (80h): Supported LBA-Change 00:30:40.879 I/O Commands 00:30:40.879 ------------ 00:30:40.879 Flush (00h): Supported LBA-Change 00:30:40.879 Write (01h): Supported LBA-Change 00:30:40.879 Read (02h): Supported 00:30:40.879 Compare (05h): Supported 00:30:40.879 Write Zeroes (08h): Supported LBA-Change 00:30:40.879 Dataset Management (09h): Supported LBA-Change 00:30:40.879 Unknown (0Ch): Supported 00:30:40.879 Unknown (12h): Supported 00:30:40.879 Copy (19h): Supported LBA-Change 00:30:40.879 Unknown (1Dh): Supported LBA-Change 00:30:40.879 00:30:40.879 Error Log 00:30:40.879 ========= 00:30:40.879 00:30:40.879 Arbitration 00:30:40.879 =========== 00:30:40.879 Arbitration Burst: no limit 00:30:40.879 00:30:40.879 Power Management 00:30:40.879 ================ 00:30:40.879 Number of Power States: 1 00:30:40.879 Current Power State: Power State #0 00:30:40.879 Power State #0: 00:30:40.879 Max Power: 25.00 W 00:30:40.879 Non-Operational State: Operational 00:30:40.879 Entry Latency: 16 microseconds 00:30:40.879 Exit Latency: 4 microseconds 00:30:40.879 Relative Read Throughput: 0 
00:30:40.879 Relative Read Latency: 0 00:30:40.879 Relative Write Throughput: 0 00:30:40.879 Relative Write Latency: 0 00:30:40.879 Idle Power: Not Reported 00:30:40.879 Active Power: Not Reported 00:30:40.879 Non-Operational Permissive Mode: Not Supported 00:30:40.879 00:30:40.879 Health Information 00:30:40.879 ================== 00:30:40.879 Critical Warnings: 00:30:40.879 Available Spare Space: OK 00:30:40.879 Temperature: OK 00:30:40.879 Device Reliability: OK 00:30:40.879 Read Only: No 00:30:40.879 Volatile Memory Backup: OK 00:30:40.879 Current Temperature: 323 Kelvin (50 Celsius) 00:30:40.879 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:40.879 Available Spare: 0% 00:30:40.879 Available Spare Threshold: 0% 00:30:40.879 Life Percentage Used: 0% 00:30:40.879 Data Units Read: 9220 00:30:40.879 Data Units Written: 4519 00:30:40.879 Host Read Commands: 321468 00:30:40.879 Host Write Commands: 176086 00:30:40.879 Controller Busy Time: 0 minutes 00:30:40.879 Power Cycles: 0 00:30:40.879 Power On Hours: 0 hours 00:30:40.879 Unsafe Shutdowns: 0 00:30:40.879 Unrecoverable Media Errors: 0 00:30:40.879 Lifetime Error Log Entries: 0 00:30:40.879 Warning Temperature Time: 0 minutes 00:30:40.879 Critical Temperature Time: 0 minutes 00:30:40.879 00:30:40.879 Number of Queues 00:30:40.879 ================ 00:30:40.879 Number of I/O Submission Queues: 64 00:30:40.879 Number of I/O Completion Queues: 64 00:30:40.879 00:30:40.879 ZNS Specific Controller Data 00:30:40.879 ============================ 00:30:40.879 Zone Append Size Limit: 0 00:30:40.879 00:30:40.879 00:30:40.879 Active Namespaces 00:30:40.879 ================= 00:30:40.879 Namespace ID:1 00:30:40.879 Error Recovery Timeout: Unlimited 00:30:40.879 Command Set Identifier: NVM (00h) 00:30:40.879 Deallocate: Supported 00:30:40.879 Deallocated/Unwritten Error: Supported 00:30:40.879 Deallocated Read Value: All 0x00 00:30:40.879 Deallocate in Write Zeroes: Not Supported 00:30:40.879 Deallocated Guard Field: 0xFFFF 00:30:40.879 Flush: Supported 00:30:40.879 Reservation: Not Supported 00:30:40.879 Namespace Sharing Capabilities: Private 00:30:40.879 Size (in LBAs): 1310720 (5GiB) 00:30:40.879 Capacity (in LBAs): 1310720 (5GiB) 00:30:40.879 Utilization (in LBAs): 1310720 (5GiB) 00:30:40.879 Thin Provisioning: Not Supported 00:30:40.879 Per-NS Atomic Units: No 00:30:40.879 Maximum Single Source Range Length: 128 00:30:40.879 Maximum Copy Length: 128 00:30:40.879 Maximum Source Range Count: 128 00:30:40.879 NGUID/EUI64 Never Reused: No 00:30:40.879 Namespace Write Protected: No 00:30:40.879 Number of LBA Formats: 8 00:30:40.879 Current LBA Format: LBA Format #04 00:30:40.879 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:40.879 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:40.879 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:40.879 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:40.879 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:40.879 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:40.879 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:40.879 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:40.879 00:30:40.879 00:30:40.879 real 0m0.658s 00:30:40.879 user 0m0.248s 00:30:40.879 sys 0m0.302s 00:30:40.879 05:49:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.879 05:49:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.879 ************************************ 00:30:40.879 END TEST nvme_identify 00:30:40.879 ************************************ 00:30:40.879 
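[Note: the identify pass above and the two perf passes below are driven by SPDK's command-line tools, whose exact invocations appear in the trace. A minimal sketch for reproducing them by hand follows, using only the build paths, PCIe address, and flags already recorded in this run (nothing below is new to this log):

  # Dump controller and namespace identify data for the QEMU NVMe device at 0000:00:06.0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0

  # Queue depth 128, 12288-byte reads for 1 second with detailed latency tracking (-LL), as in the first perf pass
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

  # Same parameters with a write workload, as in the second perf pass
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

In the perf output that follows, the summary table reports IOPS, throughput (MiB/s), and average/min/max latency in microseconds, and the per-range "Latency histogram" sections correspond to the -LL latency-tracking option used in both runs.]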
05:49:44 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:40.879 05:49:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.879 05:49:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.879 05:49:44 -- common/autotest_common.sh@10 -- # set +x 00:30:40.879 ************************************ 00:30:40.879 START TEST nvme_perf 00:30:40.879 ************************************ 00:30:40.879 05:49:44 -- common/autotest_common.sh@1104 -- # nvme_perf 00:30:40.879 05:49:44 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:42.259 Initializing NVMe Controllers 00:30:42.259 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:42.259 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:42.259 Initialization complete. Launching workers. 00:30:42.259 ======================================================== 00:30:42.259 Latency(us) 00:30:42.259 Device Information : IOPS MiB/s Average min max 00:30:42.259 PCIE (0000:00:06.0) NSID 1 from core 0: 56177.56 658.33 2279.11 1290.77 8048.96 00:30:42.259 ======================================================== 00:30:42.259 Total : 56177.56 658.33 2279.11 1290.77 8048.96 00:30:42.259 00:30:42.259 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:42.259 ================================================================================= 00:30:42.259 1.00000% : 1459.665us 00:30:42.259 10.00000% : 1638.400us 00:30:42.259 25.00000% : 1869.265us 00:30:42.259 50.00000% : 2263.971us 00:30:42.259 75.00000% : 2636.335us 00:30:42.259 90.00000% : 2874.647us 00:30:42.259 95.00000% : 3068.276us 00:30:42.259 98.00000% : 3485.324us 00:30:42.259 99.00000% : 3649.164us 00:30:42.259 99.50000% : 3872.582us 00:30:42.259 99.90000% : 6136.553us 00:30:42.259 99.99000% : 7864.320us 00:30:42.259 99.99900% : 8102.633us 00:30:42.259 99.99990% : 8102.633us 00:30:42.259 99.99999% : 8102.633us 00:30:42.259 00:30:42.259 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:42.259 ============================================================================== 00:30:42.259 Range in us Cumulative IO count 00:30:42.259 1288.378 - 1295.825: 0.0018% ( 1) 00:30:42.259 1310.720 - 1318.167: 0.0053% ( 2) 00:30:42.259 1318.167 - 1325.615: 0.0071% ( 1) 00:30:42.259 1333.062 - 1340.509: 0.0125% ( 3) 00:30:42.259 1340.509 - 1347.956: 0.0178% ( 3) 00:30:42.259 1347.956 - 1355.404: 0.0303% ( 7) 00:30:42.259 1355.404 - 1362.851: 0.0409% ( 6) 00:30:42.259 1362.851 - 1370.298: 0.0587% ( 10) 00:30:42.259 1370.298 - 1377.745: 0.0765% ( 10) 00:30:42.259 1377.745 - 1385.193: 0.0979% ( 12) 00:30:42.259 1385.193 - 1392.640: 0.1353% ( 21) 00:30:42.259 1392.640 - 1400.087: 0.1815% ( 26) 00:30:42.259 1400.087 - 1407.535: 0.2313% ( 28) 00:30:42.259 1407.535 - 1414.982: 0.3061% ( 42) 00:30:42.259 1414.982 - 1422.429: 0.3897% ( 47) 00:30:42.259 1422.429 - 1429.876: 0.4965% ( 60) 00:30:42.259 1429.876 - 1437.324: 0.6300% ( 75) 00:30:42.259 1437.324 - 1444.771: 0.7741% ( 81) 00:30:42.259 1444.771 - 1452.218: 0.9557% ( 102) 00:30:42.259 1452.218 - 1459.665: 1.1514% ( 110) 00:30:42.259 1459.665 - 1467.113: 1.3507% ( 112) 00:30:42.259 1467.113 - 1474.560: 1.5678% ( 122) 00:30:42.259 1474.560 - 1482.007: 1.7832% ( 121) 00:30:42.259 1482.007 - 1489.455: 2.0644% ( 158) 00:30:42.259 1489.455 - 1496.902: 2.3811% ( 178) 00:30:42.259 1496.902 - 1504.349: 2.6730% ( 164) 00:30:42.259 1504.349 - 1511.796: 3.0022% ( 185) 00:30:42.259 1511.796 - 1519.244: 3.3492% ( 195) 00:30:42.259 
1519.244 - 1526.691: 3.7496% ( 225) 00:30:42.259 1526.691 - 1534.138: 4.1073% ( 201) 00:30:42.259 1534.138 - 1541.585: 4.5024% ( 222) 00:30:42.259 1541.585 - 1549.033: 4.8868% ( 216) 00:30:42.259 1549.033 - 1556.480: 5.2694% ( 215) 00:30:42.259 1556.480 - 1563.927: 5.7072% ( 246) 00:30:42.259 1563.927 - 1571.375: 6.0970% ( 219) 00:30:42.259 1571.375 - 1578.822: 6.4938% ( 223) 00:30:42.259 1578.822 - 1586.269: 6.9547% ( 259) 00:30:42.259 1586.269 - 1593.716: 7.3925% ( 246) 00:30:42.259 1593.716 - 1601.164: 7.8232% ( 242) 00:30:42.259 1601.164 - 1608.611: 8.3001% ( 268) 00:30:42.259 1608.611 - 1616.058: 8.7521% ( 254) 00:30:42.259 1616.058 - 1623.505: 9.1970% ( 250) 00:30:42.259 1623.505 - 1630.953: 9.6562% ( 258) 00:30:42.259 1630.953 - 1638.400: 10.1420% ( 273) 00:30:42.259 1638.400 - 1645.847: 10.6189% ( 268) 00:30:42.259 1645.847 - 1653.295: 11.0852% ( 262) 00:30:42.259 1653.295 - 1660.742: 11.5746% ( 275) 00:30:42.259 1660.742 - 1668.189: 12.0266% ( 254) 00:30:42.259 1668.189 - 1675.636: 12.4858% ( 258) 00:30:42.259 1675.636 - 1683.084: 12.9841% ( 280) 00:30:42.259 1683.084 - 1690.531: 13.4681% ( 272) 00:30:42.259 1690.531 - 1697.978: 13.9219% ( 255) 00:30:42.259 1697.978 - 1705.425: 14.4202% ( 280) 00:30:42.259 1705.425 - 1712.873: 14.8847% ( 261) 00:30:42.259 1712.873 - 1720.320: 15.3527% ( 263) 00:30:42.259 1720.320 - 1727.767: 15.8261% ( 266) 00:30:42.259 1727.767 - 1735.215: 16.3173% ( 276) 00:30:42.259 1735.215 - 1742.662: 16.7889% ( 265) 00:30:42.259 1742.662 - 1750.109: 17.2783% ( 275) 00:30:42.259 1750.109 - 1757.556: 17.7605% ( 271) 00:30:42.259 1757.556 - 1765.004: 18.2464% ( 273) 00:30:42.259 1765.004 - 1772.451: 18.7251% ( 269) 00:30:42.259 1772.451 - 1779.898: 19.2323% ( 285) 00:30:42.259 1779.898 - 1787.345: 19.6914% ( 258) 00:30:42.259 1787.345 - 1794.793: 20.1808% ( 275) 00:30:42.259 1794.793 - 1802.240: 20.6595% ( 269) 00:30:42.259 1802.240 - 1809.687: 21.1151% ( 256) 00:30:42.259 1809.687 - 1817.135: 21.6205% ( 284) 00:30:42.259 1817.135 - 1824.582: 22.0957% ( 267) 00:30:42.259 1824.582 - 1832.029: 22.5602% ( 261) 00:30:42.259 1832.029 - 1839.476: 23.0567% ( 279) 00:30:42.259 1839.476 - 1846.924: 23.5283% ( 265) 00:30:42.259 1846.924 - 1854.371: 24.0088% ( 270) 00:30:42.259 1854.371 - 1861.818: 24.4946% ( 273) 00:30:42.259 1861.818 - 1869.265: 25.0053% ( 287) 00:30:42.259 1869.265 - 1876.713: 25.4645% ( 258) 00:30:42.259 1876.713 - 1884.160: 25.9379% ( 266) 00:30:42.259 1884.160 - 1891.607: 26.4237% ( 273) 00:30:42.259 1891.607 - 1899.055: 26.9042% ( 270) 00:30:42.259 1899.055 - 1906.502: 27.3651% ( 259) 00:30:42.259 1906.502 - 1921.396: 28.3563% ( 557) 00:30:42.259 1921.396 - 1936.291: 29.3209% ( 542) 00:30:42.259 1936.291 - 1951.185: 30.2997% ( 550) 00:30:42.259 1951.185 - 1966.080: 31.2749% ( 548) 00:30:42.259 1966.080 - 1980.975: 32.2199% ( 531) 00:30:42.259 1980.975 - 1995.869: 33.1809% ( 540) 00:30:42.259 1995.869 - 2010.764: 34.1508% ( 545) 00:30:42.259 2010.764 - 2025.658: 35.1189% ( 544) 00:30:42.259 2025.658 - 2040.553: 36.0870% ( 544) 00:30:42.259 2040.553 - 2055.447: 37.0551% ( 544) 00:30:42.259 2055.447 - 2070.342: 38.0179% ( 541) 00:30:42.259 2070.342 - 2085.236: 39.0091% ( 557) 00:30:42.259 2085.236 - 2100.131: 39.9683% ( 539) 00:30:42.259 2100.131 - 2115.025: 40.9311% ( 541) 00:30:42.259 2115.025 - 2129.920: 41.9152% ( 553) 00:30:42.259 2129.920 - 2144.815: 42.8904% ( 548) 00:30:42.259 2144.815 - 2159.709: 43.8870% ( 560) 00:30:42.259 2159.709 - 2174.604: 44.8551% ( 544) 00:30:42.259 2174.604 - 2189.498: 45.8019% ( 532) 00:30:42.259 2189.498 - 
2204.393: 46.7860% ( 553) 00:30:42.259 2204.393 - 2219.287: 47.7790% ( 558) 00:30:42.259 2219.287 - 2234.182: 48.7507% ( 546) 00:30:42.259 2234.182 - 2249.076: 49.7384% ( 555) 00:30:42.259 2249.076 - 2263.971: 50.6958% ( 538) 00:30:42.259 2263.971 - 2278.865: 51.6924% ( 560) 00:30:42.259 2278.865 - 2293.760: 52.7050% ( 569) 00:30:42.259 2293.760 - 2308.655: 53.6518% ( 532) 00:30:42.259 2308.655 - 2323.549: 54.6466% ( 559) 00:30:42.259 2323.549 - 2338.444: 55.6307% ( 553) 00:30:42.259 2338.444 - 2353.338: 56.6130% ( 552) 00:30:42.259 2353.338 - 2368.233: 57.6221% ( 567) 00:30:42.259 2368.233 - 2383.127: 58.5920% ( 545) 00:30:42.259 2383.127 - 2398.022: 59.5423% ( 534) 00:30:42.259 2398.022 - 2412.916: 60.5229% ( 551) 00:30:42.259 2412.916 - 2427.811: 61.5105% ( 555) 00:30:42.259 2427.811 - 2442.705: 62.4911% ( 551) 00:30:42.259 2442.705 - 2457.600: 63.4877% ( 560) 00:30:42.259 2457.600 - 2472.495: 64.4433% ( 537) 00:30:42.259 2472.495 - 2487.389: 65.4470% ( 564) 00:30:42.259 2487.389 - 2502.284: 66.4294% ( 552) 00:30:42.259 2502.284 - 2517.178: 67.4064% ( 549) 00:30:42.260 2517.178 - 2532.073: 68.4012% ( 559) 00:30:42.260 2532.073 - 2546.967: 69.3711% ( 545) 00:30:42.260 2546.967 - 2561.862: 70.3196% ( 533) 00:30:42.260 2561.862 - 2576.756: 71.3180% ( 561) 00:30:42.260 2576.756 - 2591.651: 72.2772% ( 539) 00:30:42.260 2591.651 - 2606.545: 73.2595% ( 552) 00:30:42.260 2606.545 - 2621.440: 74.2365% ( 549) 00:30:42.260 2621.440 - 2636.335: 75.1833% ( 532) 00:30:42.260 2636.335 - 2651.229: 76.2066% ( 575) 00:30:42.260 2651.229 - 2666.124: 77.1836% ( 549) 00:30:42.260 2666.124 - 2681.018: 78.1410% ( 538) 00:30:42.260 2681.018 - 2695.913: 79.1465% ( 565) 00:30:42.260 2695.913 - 2710.807: 80.0897% ( 530) 00:30:42.260 2710.807 - 2725.702: 81.0667% ( 549) 00:30:42.260 2725.702 - 2740.596: 82.0793% ( 569) 00:30:42.260 2740.596 - 2755.491: 83.0154% ( 526) 00:30:42.260 2755.491 - 2770.385: 83.9924% ( 549) 00:30:42.260 2770.385 - 2785.280: 84.9623% ( 545) 00:30:42.260 2785.280 - 2800.175: 85.8752% ( 513) 00:30:42.260 2800.175 - 2815.069: 86.8540% ( 550) 00:30:42.260 2815.069 - 2829.964: 87.7242% ( 489) 00:30:42.260 2829.964 - 2844.858: 88.5713% ( 476) 00:30:42.260 2844.858 - 2859.753: 89.4273% ( 481) 00:30:42.260 2859.753 - 2874.647: 90.1463% ( 404) 00:30:42.260 2874.647 - 2889.542: 90.8225% ( 380) 00:30:42.260 2889.542 - 2904.436: 91.4668% ( 362) 00:30:42.260 2904.436 - 2919.331: 92.0238% ( 313) 00:30:42.260 2919.331 - 2934.225: 92.5132% ( 275) 00:30:42.260 2934.225 - 2949.120: 92.9634% ( 253) 00:30:42.260 2949.120 - 2964.015: 93.3656% ( 226) 00:30:42.260 2964.015 - 2978.909: 93.7375% ( 209) 00:30:42.260 2978.909 - 2993.804: 94.0543% ( 178) 00:30:42.260 2993.804 - 3008.698: 94.3319% ( 156) 00:30:42.260 3008.698 - 3023.593: 94.5651% ( 131) 00:30:42.260 3023.593 - 3038.487: 94.7733% ( 117) 00:30:42.260 3038.487 - 3053.382: 94.9797% ( 116) 00:30:42.260 3053.382 - 3068.276: 95.1488% ( 95) 00:30:42.260 3068.276 - 3083.171: 95.2929% ( 81) 00:30:42.260 3083.171 - 3098.065: 95.4175% ( 70) 00:30:42.260 3098.065 - 3112.960: 95.5456% ( 72) 00:30:42.260 3112.960 - 3127.855: 95.6684% ( 69) 00:30:42.260 3127.855 - 3142.749: 95.7859% ( 66) 00:30:42.260 3142.749 - 3157.644: 95.9051% ( 67) 00:30:42.260 3157.644 - 3172.538: 95.9977% ( 52) 00:30:42.260 3172.538 - 3187.433: 96.1080% ( 62) 00:30:42.260 3187.433 - 3202.327: 96.2183% ( 62) 00:30:42.260 3202.327 - 3217.222: 96.3180% ( 56) 00:30:42.260 3217.222 - 3232.116: 96.4212% ( 58) 00:30:42.260 3232.116 - 3247.011: 96.5173% ( 54) 00:30:42.260 3247.011 - 3261.905: 
96.6152% ( 55) 00:30:42.260 3261.905 - 3276.800: 96.7077% ( 52) 00:30:42.260 3276.800 - 3291.695: 96.8003% ( 52) 00:30:42.260 3291.695 - 3306.589: 96.9035% ( 58) 00:30:42.260 3306.589 - 3321.484: 96.9836% ( 45) 00:30:42.260 3321.484 - 3336.378: 97.0797% ( 54) 00:30:42.260 3336.378 - 3351.273: 97.1811% ( 57) 00:30:42.260 3351.273 - 3366.167: 97.2879% ( 60) 00:30:42.260 3366.167 - 3381.062: 97.3893% ( 57) 00:30:42.260 3381.062 - 3395.956: 97.4836% ( 53) 00:30:42.260 3395.956 - 3410.851: 97.5851% ( 57) 00:30:42.260 3410.851 - 3425.745: 97.6758% ( 51) 00:30:42.260 3425.745 - 3440.640: 97.7773% ( 57) 00:30:42.260 3440.640 - 3455.535: 97.8769% ( 56) 00:30:42.260 3455.535 - 3470.429: 97.9623% ( 48) 00:30:42.260 3470.429 - 3485.324: 98.0531% ( 51) 00:30:42.260 3485.324 - 3500.218: 98.1528% ( 56) 00:30:42.260 3500.218 - 3515.113: 98.2560% ( 58) 00:30:42.260 3515.113 - 3530.007: 98.3556% ( 56) 00:30:42.260 3530.007 - 3544.902: 98.4464% ( 51) 00:30:42.260 3544.902 - 3559.796: 98.5354% ( 50) 00:30:42.260 3559.796 - 3574.691: 98.6368% ( 57) 00:30:42.260 3574.691 - 3589.585: 98.7205% ( 47) 00:30:42.260 3589.585 - 3604.480: 98.8148% ( 53) 00:30:42.260 3604.480 - 3619.375: 98.9002% ( 48) 00:30:42.260 3619.375 - 3634.269: 98.9767% ( 43) 00:30:42.260 3634.269 - 3649.164: 99.0461% ( 39) 00:30:42.260 3649.164 - 3664.058: 99.1138% ( 38) 00:30:42.260 3664.058 - 3678.953: 99.1849% ( 40) 00:30:42.260 3678.953 - 3693.847: 99.2401% ( 31) 00:30:42.260 3693.847 - 3708.742: 99.2917% ( 29) 00:30:42.260 3708.742 - 3723.636: 99.3309% ( 22) 00:30:42.260 3723.636 - 3738.531: 99.3629% ( 18) 00:30:42.260 3738.531 - 3753.425: 99.3896% ( 15) 00:30:42.260 3753.425 - 3768.320: 99.4074% ( 10) 00:30:42.260 3768.320 - 3783.215: 99.4252% ( 10) 00:30:42.260 3783.215 - 3798.109: 99.4430% ( 10) 00:30:42.260 3798.109 - 3813.004: 99.4554% ( 7) 00:30:42.260 3813.004 - 3842.793: 99.4875% ( 18) 00:30:42.260 3842.793 - 3872.582: 99.5124% ( 14) 00:30:42.260 3872.582 - 3902.371: 99.5426% ( 17) 00:30:42.260 3902.371 - 3932.160: 99.5622% ( 11) 00:30:42.260 3932.160 - 3961.949: 99.5818% ( 11) 00:30:42.260 3961.949 - 3991.738: 99.6014% ( 11) 00:30:42.260 3991.738 - 4021.527: 99.6156% ( 8) 00:30:42.260 4021.527 - 4051.316: 99.6334% ( 10) 00:30:42.260 4051.316 - 4081.105: 99.6512% ( 10) 00:30:42.260 4081.105 - 4110.895: 99.6690% ( 10) 00:30:42.260 4110.895 - 4140.684: 99.6886% ( 11) 00:30:42.260 4140.684 - 4170.473: 99.7028% ( 8) 00:30:42.260 4170.473 - 4200.262: 99.7224% ( 11) 00:30:42.260 4200.262 - 4230.051: 99.7348% ( 7) 00:30:42.260 4230.051 - 4259.840: 99.7491% ( 8) 00:30:42.260 4259.840 - 4289.629: 99.7615% ( 7) 00:30:42.260 4289.629 - 4319.418: 99.7687% ( 4) 00:30:42.260 4319.418 - 4349.207: 99.7758% ( 4) 00:30:42.260 4349.207 - 4378.996: 99.7811% ( 3) 00:30:42.260 4378.996 - 4408.785: 99.7864% ( 3) 00:30:42.260 4408.785 - 4438.575: 99.7918% ( 3) 00:30:42.260 4438.575 - 4468.364: 99.7971% ( 3) 00:30:42.260 4468.364 - 4498.153: 99.8025% ( 3) 00:30:42.260 4498.153 - 4527.942: 99.8060% ( 2) 00:30:42.260 4527.942 - 4557.731: 99.8114% ( 3) 00:30:42.260 4557.731 - 4587.520: 99.8167% ( 3) 00:30:42.260 4587.520 - 4617.309: 99.8203% ( 2) 00:30:42.260 4617.309 - 4647.098: 99.8220% ( 1) 00:30:42.260 4676.887 - 4706.676: 99.8238% ( 1) 00:30:42.260 4706.676 - 4736.465: 99.8256% ( 1) 00:30:42.260 4736.465 - 4766.255: 99.8274% ( 1) 00:30:42.260 4766.255 - 4796.044: 99.8292% ( 1) 00:30:42.260 4796.044 - 4825.833: 99.8309% ( 1) 00:30:42.260 4825.833 - 4855.622: 99.8327% ( 1) 00:30:42.260 4855.622 - 4885.411: 99.8345% ( 1) 00:30:42.260 4885.411 - 4915.200: 
99.8363% ( 1) 00:30:42.260 4915.200 - 4944.989: 99.8381% ( 1) 00:30:42.260 4944.989 - 4974.778: 99.8398% ( 1) 00:30:42.260 5004.567 - 5034.356: 99.8416% ( 1) 00:30:42.260 5034.356 - 5064.145: 99.8434% ( 1) 00:30:42.260 5064.145 - 5093.935: 99.8452% ( 1) 00:30:42.260 5093.935 - 5123.724: 99.8470% ( 1) 00:30:42.260 5123.724 - 5153.513: 99.8487% ( 1) 00:30:42.260 5153.513 - 5183.302: 99.8505% ( 1) 00:30:42.260 5183.302 - 5213.091: 99.8523% ( 1) 00:30:42.260 5213.091 - 5242.880: 99.8541% ( 1) 00:30:42.260 5272.669 - 5302.458: 99.8559% ( 1) 00:30:42.260 5302.458 - 5332.247: 99.8576% ( 1) 00:30:42.260 5332.247 - 5362.036: 99.8594% ( 1) 00:30:42.260 5362.036 - 5391.825: 99.8612% ( 1) 00:30:42.260 5391.825 - 5421.615: 99.8630% ( 1) 00:30:42.260 5421.615 - 5451.404: 99.8647% ( 1) 00:30:42.260 5451.404 - 5481.193: 99.8665% ( 1) 00:30:42.260 5481.193 - 5510.982: 99.8683% ( 1) 00:30:42.260 5510.982 - 5540.771: 99.8701% ( 1) 00:30:42.260 5540.771 - 5570.560: 99.8719% ( 1) 00:30:42.260 5600.349 - 5630.138: 99.8736% ( 1) 00:30:42.260 5630.138 - 5659.927: 99.8754% ( 1) 00:30:42.260 5659.927 - 5689.716: 99.8772% ( 1) 00:30:42.260 5689.716 - 5719.505: 99.8790% ( 1) 00:30:42.260 5719.505 - 5749.295: 99.8808% ( 1) 00:30:42.260 5749.295 - 5779.084: 99.8825% ( 1) 00:30:42.260 5779.084 - 5808.873: 99.8843% ( 1) 00:30:42.260 5808.873 - 5838.662: 99.8861% ( 1) 00:30:42.260 5868.451 - 5898.240: 99.8879% ( 1) 00:30:42.260 5898.240 - 5928.029: 99.8897% ( 1) 00:30:42.260 5928.029 - 5957.818: 99.8914% ( 1) 00:30:42.260 5957.818 - 5987.607: 99.8932% ( 1) 00:30:42.260 5987.607 - 6017.396: 99.8950% ( 1) 00:30:42.260 6017.396 - 6047.185: 99.8968% ( 1) 00:30:42.260 6047.185 - 6076.975: 99.8986% ( 1) 00:30:42.260 6106.764 - 6136.553: 99.9003% ( 1) 00:30:42.260 6136.553 - 6166.342: 99.9021% ( 1) 00:30:42.260 6166.342 - 6196.131: 99.9039% ( 1) 00:30:42.260 6196.131 - 6225.920: 99.9057% ( 1) 00:30:42.260 6225.920 - 6255.709: 99.9075% ( 1) 00:30:42.260 6285.498 - 6315.287: 99.9092% ( 1) 00:30:42.260 6315.287 - 6345.076: 99.9110% ( 1) 00:30:42.260 6345.076 - 6374.865: 99.9128% ( 1) 00:30:42.260 6374.865 - 6404.655: 99.9146% ( 1) 00:30:42.260 6404.655 - 6434.444: 99.9164% ( 1) 00:30:42.260 6434.444 - 6464.233: 99.9181% ( 1) 00:30:42.260 6464.233 - 6494.022: 99.9199% ( 1) 00:30:42.260 6523.811 - 6553.600: 99.9217% ( 1) 00:30:42.260 6553.600 - 6583.389: 99.9235% ( 1) 00:30:42.260 6583.389 - 6613.178: 99.9253% ( 1) 00:30:42.260 6613.178 - 6642.967: 99.9270% ( 1) 00:30:42.260 6642.967 - 6672.756: 99.9288% ( 1) 00:30:42.260 6672.756 - 6702.545: 99.9306% ( 1) 00:30:42.260 6702.545 - 6732.335: 99.9324% ( 1) 00:30:42.260 6732.335 - 6762.124: 99.9342% ( 1) 00:30:42.260 6762.124 - 6791.913: 99.9359% ( 1) 00:30:42.260 6821.702 - 6851.491: 99.9377% ( 1) 00:30:42.260 6851.491 - 6881.280: 99.9395% ( 1) 00:30:42.260 6881.280 - 6911.069: 99.9413% ( 1) 00:30:42.261 6911.069 - 6940.858: 99.9431% ( 1) 00:30:42.261 6940.858 - 6970.647: 99.9448% ( 1) 00:30:42.261 6970.647 - 7000.436: 99.9466% ( 1) 00:30:42.261 7030.225 - 7060.015: 99.9484% ( 1) 00:30:42.261 7060.015 - 7089.804: 99.9502% ( 1) 00:30:42.261 7089.804 - 7119.593: 99.9520% ( 1) 00:30:42.261 7119.593 - 7149.382: 99.9537% ( 1) 00:30:42.261 7149.382 - 7179.171: 99.9555% ( 1) 00:30:42.261 7179.171 - 7208.960: 99.9573% ( 1) 00:30:42.261 7208.960 - 7238.749: 99.9591% ( 1) 00:30:42.261 7238.749 - 7268.538: 99.9608% ( 1) 00:30:42.261 7268.538 - 7298.327: 99.9626% ( 1) 00:30:42.261 7328.116 - 7357.905: 99.9644% ( 1) 00:30:42.261 7357.905 - 7387.695: 99.9662% ( 1) 00:30:42.261 7387.695 - 7417.484: 
99.9680% ( 1) 00:30:42.261 7417.484 - 7447.273: 99.9697% ( 1) 00:30:42.261 7447.273 - 7477.062: 99.9715% ( 1) 00:30:42.261 7477.062 - 7506.851: 99.9733% ( 1) 00:30:42.261 7506.851 - 7536.640: 99.9751% ( 1) 00:30:42.261 7536.640 - 7566.429: 99.9769% ( 1) 00:30:42.261 7596.218 - 7626.007: 99.9786% ( 1) 00:30:42.261 7626.007 - 7685.585: 99.9822% ( 2) 00:30:42.261 7685.585 - 7745.164: 99.9858% ( 2) 00:30:42.261 7745.164 - 7804.742: 99.9893% ( 2) 00:30:42.261 7804.742 - 7864.320: 99.9929% ( 2) 00:30:42.261 7864.320 - 7923.898: 99.9964% ( 2) 00:30:42.261 7923.898 - 7983.476: 99.9982% ( 1) 00:30:42.261 8043.055 - 8102.633: 100.0000% ( 1) 00:30:42.261 00:30:42.261 05:49:46 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:43.638 Initializing NVMe Controllers 00:30:43.638 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:43.638 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:43.638 Initialization complete. Launching workers. 00:30:43.638 ======================================================== 00:30:43.638 Latency(us) 00:30:43.638 Device Information : IOPS MiB/s Average min max 00:30:43.638 PCIE (0000:00:06.0) NSID 1 from core 0: 57630.00 675.35 2220.67 1160.15 12648.87 00:30:43.638 ======================================================== 00:30:43.638 Total : 57630.00 675.35 2220.67 1160.15 12648.87 00:30:43.638 00:30:43.638 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:43.638 ================================================================================= 00:30:43.638 1.00000% : 1608.611us 00:30:43.638 10.00000% : 1854.371us 00:30:43.638 25.00000% : 1995.869us 00:30:43.638 50.00000% : 2159.709us 00:30:43.638 75.00000% : 2383.127us 00:30:43.638 90.00000% : 2666.124us 00:30:43.638 95.00000% : 2874.647us 00:30:43.638 98.00000% : 3157.644us 00:30:43.638 99.00000% : 3366.167us 00:30:43.638 99.50000% : 3589.585us 00:30:43.638 99.90000% : 5064.145us 00:30:43.638 99.99000% : 12570.996us 00:30:43.638 99.99900% : 12690.153us 00:30:43.638 99.99990% : 12690.153us 00:30:43.638 99.99999% : 12690.153us 00:30:43.638 00:30:43.638 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:43.638 ============================================================================== 00:30:43.638 Range in us Cumulative IO count 00:30:43.638 1154.327 - 1161.775: 0.0017% ( 1) 00:30:43.638 1169.222 - 1176.669: 0.0035% ( 1) 00:30:43.638 1199.011 - 1206.458: 0.0052% ( 1) 00:30:43.638 1213.905 - 1221.353: 0.0087% ( 2) 00:30:43.638 1243.695 - 1251.142: 0.0104% ( 1) 00:30:43.638 1251.142 - 1258.589: 0.0121% ( 1) 00:30:43.638 1258.589 - 1266.036: 0.0330% ( 12) 00:30:43.638 1266.036 - 1273.484: 0.0538% ( 12) 00:30:43.638 1273.484 - 1280.931: 0.0833% ( 17) 00:30:43.638 1280.931 - 1288.378: 0.0885% ( 3) 00:30:43.638 1288.378 - 1295.825: 0.0937% ( 3) 00:30:43.638 1295.825 - 1303.273: 0.0972% ( 2) 00:30:43.638 1303.273 - 1310.720: 0.1058% ( 5) 00:30:43.638 1310.720 - 1318.167: 0.1440% ( 22) 00:30:43.638 1318.167 - 1325.615: 0.1492% ( 3) 00:30:43.638 1325.615 - 1333.062: 0.1527% ( 2) 00:30:43.638 1340.509 - 1347.956: 0.1596% ( 4) 00:30:43.638 1347.956 - 1355.404: 0.1701% ( 6) 00:30:43.638 1355.404 - 1362.851: 0.1753% ( 3) 00:30:43.638 1362.851 - 1370.298: 0.1874% ( 7) 00:30:43.638 1370.298 - 1377.745: 0.1995% ( 7) 00:30:43.638 1377.745 - 1385.193: 0.2082% ( 5) 00:30:43.638 1385.193 - 1392.640: 0.2204% ( 7) 00:30:43.638 1392.640 - 1400.087: 0.2325% ( 7) 00:30:43.638 1400.087 - 1407.535: 0.2429% ( 6) 00:30:43.638 
1407.535 - 1414.982: 0.2551% ( 7) 00:30:43.638 1414.982 - 1422.429: 0.2707% ( 9) 00:30:43.638 1422.429 - 1429.876: 0.2933% ( 13) 00:30:43.638 1429.876 - 1437.324: 0.3019% ( 5) 00:30:43.638 1437.324 - 1444.771: 0.3141% ( 7) 00:30:43.638 1444.771 - 1452.218: 0.3262% ( 7) 00:30:43.638 1452.218 - 1459.665: 0.3401% ( 8) 00:30:43.638 1459.665 - 1467.113: 0.3470% ( 4) 00:30:43.638 1467.113 - 1474.560: 0.3644% ( 10) 00:30:43.638 1474.560 - 1482.007: 0.3835% ( 11) 00:30:43.638 1482.007 - 1489.455: 0.4026% ( 11) 00:30:43.638 1489.455 - 1496.902: 0.4407% ( 22) 00:30:43.638 1496.902 - 1504.349: 0.4841% ( 25) 00:30:43.638 1504.349 - 1511.796: 0.5067% ( 13) 00:30:43.638 1511.796 - 1519.244: 0.5275% ( 12) 00:30:43.638 1519.244 - 1526.691: 0.5449% ( 10) 00:30:43.638 1526.691 - 1534.138: 0.5744% ( 17) 00:30:43.638 1534.138 - 1541.585: 0.6021% ( 16) 00:30:43.638 1541.585 - 1549.033: 0.6299% ( 16) 00:30:43.638 1549.033 - 1556.480: 0.6681% ( 22) 00:30:43.638 1556.480 - 1563.927: 0.7114% ( 25) 00:30:43.638 1563.927 - 1571.375: 0.7635% ( 30) 00:30:43.638 1571.375 - 1578.822: 0.7965% ( 19) 00:30:43.638 1578.822 - 1586.269: 0.8555% ( 34) 00:30:43.638 1586.269 - 1593.716: 0.9023% ( 27) 00:30:43.638 1593.716 - 1601.164: 0.9509% ( 28) 00:30:43.638 1601.164 - 1608.611: 1.0012% ( 29) 00:30:43.638 1608.611 - 1616.058: 1.0637% ( 36) 00:30:43.638 1616.058 - 1623.505: 1.1279% ( 37) 00:30:43.638 1623.505 - 1630.953: 1.1973% ( 40) 00:30:43.638 1630.953 - 1638.400: 1.2806% ( 48) 00:30:43.638 1638.400 - 1645.847: 1.3830% ( 59) 00:30:43.638 1645.847 - 1653.295: 1.4836% ( 58) 00:30:43.638 1653.295 - 1660.742: 1.6051% ( 70) 00:30:43.638 1660.742 - 1668.189: 1.7196% ( 66) 00:30:43.638 1668.189 - 1675.636: 1.8810% ( 93) 00:30:43.638 1675.636 - 1683.084: 2.0684% ( 108) 00:30:43.638 1683.084 - 1690.531: 2.2332% ( 95) 00:30:43.638 1690.531 - 1697.978: 2.3946% ( 93) 00:30:43.638 1697.978 - 1705.425: 2.5438% ( 86) 00:30:43.638 1705.425 - 1712.873: 2.7121% ( 97) 00:30:43.638 1712.873 - 1720.320: 2.9047% ( 111) 00:30:43.638 1720.320 - 1727.767: 3.0921% ( 108) 00:30:43.638 1727.767 - 1735.215: 3.3004% ( 120) 00:30:43.638 1735.215 - 1742.662: 3.5364% ( 136) 00:30:43.638 1742.662 - 1750.109: 3.8556% ( 184) 00:30:43.638 1750.109 - 1757.556: 4.1142% ( 149) 00:30:43.638 1757.556 - 1765.004: 4.4109% ( 171) 00:30:43.638 1765.004 - 1772.451: 4.7909% ( 219) 00:30:43.638 1772.451 - 1779.898: 5.1536% ( 209) 00:30:43.638 1779.898 - 1787.345: 5.5318% ( 218) 00:30:43.638 1787.345 - 1794.793: 5.9448% ( 238) 00:30:43.638 1794.793 - 1802.240: 6.4203% ( 274) 00:30:43.638 1802.240 - 1809.687: 6.8940% ( 273) 00:30:43.638 1809.687 - 1817.135: 7.4839% ( 340) 00:30:43.638 1817.135 - 1824.582: 7.9906% ( 292) 00:30:43.638 1824.582 - 1832.029: 8.5112% ( 300) 00:30:43.638 1832.029 - 1839.476: 9.0665% ( 320) 00:30:43.638 1839.476 - 1846.924: 9.6269% ( 323) 00:30:43.638 1846.924 - 1854.371: 10.4026% ( 447) 00:30:43.638 1854.371 - 1861.818: 11.0324% ( 363) 00:30:43.638 1861.818 - 1869.265: 11.7022% ( 386) 00:30:43.638 1869.265 - 1876.713: 12.4709% ( 443) 00:30:43.638 1876.713 - 1884.160: 13.1963% ( 418) 00:30:43.638 1884.160 - 1891.607: 13.9337% ( 425) 00:30:43.638 1891.607 - 1899.055: 14.7007% ( 442) 00:30:43.638 1899.055 - 1906.502: 15.4971% ( 459) 00:30:43.638 1906.502 - 1921.396: 16.9929% ( 862) 00:30:43.638 1921.396 - 1936.291: 18.5025% ( 870) 00:30:43.638 1936.291 - 1951.185: 20.4286% ( 1110) 00:30:43.638 1951.185 - 1966.080: 22.2714% ( 1062) 00:30:43.638 1966.080 - 1980.975: 24.0465% ( 1023) 00:30:43.639 1980.975 - 1995.869: 25.8338% ( 1030) 00:30:43.639 
1995.869 - 2010.764: 27.7755% ( 1119) 00:30:43.639 2010.764 - 2025.658: 29.6651% ( 1089) 00:30:43.639 2025.658 - 2040.553: 31.6675% ( 1154) 00:30:43.639 2040.553 - 2055.447: 33.6804% ( 1160) 00:30:43.639 2055.447 - 2070.342: 35.8928% ( 1275) 00:30:43.639 2070.342 - 2085.236: 38.1416% ( 1296) 00:30:43.639 2085.236 - 2100.131: 40.6021% ( 1418) 00:30:43.639 2100.131 - 2115.025: 42.8128% ( 1274) 00:30:43.639 2115.025 - 2129.920: 45.0356% ( 1281) 00:30:43.639 2129.920 - 2144.815: 47.6904% ( 1530) 00:30:43.639 2144.815 - 2159.709: 50.2256% ( 1461) 00:30:43.639 2159.709 - 2174.604: 52.5768% ( 1355) 00:30:43.639 2174.604 - 2189.498: 54.8551% ( 1313) 00:30:43.639 2189.498 - 2204.393: 57.0363% ( 1257) 00:30:43.639 2204.393 - 2219.287: 59.2035% ( 1249) 00:30:43.639 2219.287 - 2234.182: 61.2077% ( 1155) 00:30:43.639 2234.182 - 2249.076: 63.2674% ( 1187) 00:30:43.639 2249.076 - 2263.971: 65.1067% ( 1060) 00:30:43.639 2263.971 - 2278.865: 66.8437% ( 1001) 00:30:43.639 2278.865 - 2293.760: 68.4695% ( 937) 00:30:43.639 2293.760 - 2308.655: 69.8334% ( 786) 00:30:43.639 2308.655 - 2323.549: 71.3621% ( 881) 00:30:43.639 2323.549 - 2338.444: 72.4605% ( 633) 00:30:43.639 2338.444 - 2353.338: 73.6786% ( 702) 00:30:43.639 2353.338 - 2368.233: 74.9558% ( 736) 00:30:43.639 2368.233 - 2383.127: 76.0455% ( 628) 00:30:43.639 2383.127 - 2398.022: 77.1091% ( 613) 00:30:43.639 2398.022 - 2412.916: 78.1190% ( 582) 00:30:43.639 2412.916 - 2427.811: 79.1272% ( 581) 00:30:43.639 2427.811 - 2442.705: 80.0538% ( 534) 00:30:43.639 2442.705 - 2457.600: 80.8971% ( 486) 00:30:43.639 2457.600 - 2472.495: 81.7369% ( 484) 00:30:43.639 2472.495 - 2487.389: 82.5993% ( 497) 00:30:43.639 2487.389 - 2502.284: 83.4496% ( 490) 00:30:43.639 2502.284 - 2517.178: 84.2183% ( 443) 00:30:43.639 2517.178 - 2532.073: 85.0425% ( 475) 00:30:43.639 2532.073 - 2546.967: 85.8841% ( 485) 00:30:43.639 2546.967 - 2561.862: 86.5504% ( 384) 00:30:43.639 2561.862 - 2576.756: 87.2046% ( 377) 00:30:43.639 2576.756 - 2591.651: 87.8032% ( 345) 00:30:43.639 2591.651 - 2606.545: 88.4331% ( 363) 00:30:43.639 2606.545 - 2621.440: 89.0040% ( 329) 00:30:43.639 2621.440 - 2636.335: 89.5332% ( 305) 00:30:43.639 2636.335 - 2651.229: 89.9844% ( 260) 00:30:43.639 2651.229 - 2666.124: 90.4442% ( 265) 00:30:43.639 2666.124 - 2681.018: 90.8780% ( 250) 00:30:43.639 2681.018 - 2695.913: 91.2563% ( 218) 00:30:43.639 2695.913 - 2710.807: 91.6346% ( 218) 00:30:43.639 2710.807 - 2725.702: 92.0059% ( 214) 00:30:43.639 2725.702 - 2740.596: 92.3616% ( 205) 00:30:43.639 2740.596 - 2755.491: 92.7052% ( 198) 00:30:43.639 2755.491 - 2770.385: 93.0106% ( 176) 00:30:43.639 2770.385 - 2785.280: 93.3420% ( 191) 00:30:43.639 2785.280 - 2800.175: 93.6474% ( 176) 00:30:43.639 2800.175 - 2815.069: 93.9493% ( 174) 00:30:43.639 2815.069 - 2829.964: 94.2391% ( 167) 00:30:43.639 2829.964 - 2844.858: 94.5428% ( 175) 00:30:43.639 2844.858 - 2859.753: 94.8221% ( 161) 00:30:43.639 2859.753 - 2874.647: 95.0859% ( 152) 00:30:43.639 2874.647 - 2889.542: 95.3358% ( 144) 00:30:43.639 2889.542 - 2904.436: 95.5579% ( 128) 00:30:43.639 2904.436 - 2919.331: 95.8025% ( 141) 00:30:43.639 2919.331 - 2934.225: 96.0142% ( 122) 00:30:43.639 2934.225 - 2949.120: 96.1912% ( 102) 00:30:43.639 2949.120 - 2964.015: 96.3804% ( 109) 00:30:43.639 2964.015 - 2978.909: 96.5573% ( 102) 00:30:43.639 2978.909 - 2993.804: 96.7343% ( 102) 00:30:43.639 2993.804 - 3008.698: 96.8836% ( 86) 00:30:43.639 3008.698 - 3023.593: 97.0172% ( 77) 00:30:43.639 3023.593 - 3038.487: 97.1629% ( 84) 00:30:43.639 3038.487 - 3053.382: 97.2965% ( 77) 
00:30:43.639 3053.382 - 3068.276: 97.4163% ( 69) 00:30:43.639 3068.276 - 3083.171: 97.5325% ( 67) 00:30:43.639 3083.171 - 3098.065: 97.6505% ( 68) 00:30:43.639 3098.065 - 3112.960: 97.7651% ( 66) 00:30:43.639 3112.960 - 3127.855: 97.8692% ( 60) 00:30:43.639 3127.855 - 3142.749: 97.9611% ( 53) 00:30:43.639 3142.749 - 3157.644: 98.0600% ( 57) 00:30:43.639 3157.644 - 3172.538: 98.1381% ( 45) 00:30:43.639 3172.538 - 3187.433: 98.2318% ( 54) 00:30:43.639 3187.433 - 3202.327: 98.3099% ( 45) 00:30:43.639 3202.327 - 3217.222: 98.3880% ( 45) 00:30:43.639 3217.222 - 3232.116: 98.4626% ( 43) 00:30:43.639 3232.116 - 3247.011: 98.5320% ( 40) 00:30:43.639 3247.011 - 3261.905: 98.5945% ( 36) 00:30:43.639 3261.905 - 3276.800: 98.6552% ( 35) 00:30:43.639 3276.800 - 3291.695: 98.7264% ( 41) 00:30:43.639 3291.695 - 3306.589: 98.7732% ( 27) 00:30:43.639 3306.589 - 3321.484: 98.8322% ( 34) 00:30:43.639 3321.484 - 3336.378: 98.8912% ( 34) 00:30:43.639 3336.378 - 3351.273: 98.9450% ( 31) 00:30:43.639 3351.273 - 3366.167: 99.0092% ( 37) 00:30:43.639 3366.167 - 3381.062: 99.0665% ( 33) 00:30:43.639 3381.062 - 3395.956: 99.1150% ( 28) 00:30:43.639 3395.956 - 3410.851: 99.1671% ( 30) 00:30:43.639 3410.851 - 3425.745: 99.2018% ( 20) 00:30:43.639 3425.745 - 3440.640: 99.2469% ( 26) 00:30:43.639 3440.640 - 3455.535: 99.2816% ( 20) 00:30:43.639 3455.535 - 3470.429: 99.3163% ( 20) 00:30:43.639 3470.429 - 3485.324: 99.3476% ( 18) 00:30:43.639 3485.324 - 3500.218: 99.3788% ( 18) 00:30:43.639 3500.218 - 3515.113: 99.4031% ( 14) 00:30:43.639 3515.113 - 3530.007: 99.4309% ( 16) 00:30:43.639 3530.007 - 3544.902: 99.4517% ( 12) 00:30:43.639 3544.902 - 3559.796: 99.4760% ( 14) 00:30:43.639 3559.796 - 3574.691: 99.4951% ( 11) 00:30:43.639 3574.691 - 3589.585: 99.5107% ( 9) 00:30:43.639 3589.585 - 3604.480: 99.5263% ( 9) 00:30:43.639 3604.480 - 3619.375: 99.5384% ( 7) 00:30:43.639 3619.375 - 3634.269: 99.5558% ( 10) 00:30:43.639 3634.269 - 3649.164: 99.5679% ( 7) 00:30:43.639 3649.164 - 3664.058: 99.5766% ( 5) 00:30:43.639 3664.058 - 3678.953: 99.5905% ( 8) 00:30:43.639 3678.953 - 3693.847: 99.5992% ( 5) 00:30:43.639 3693.847 - 3708.742: 99.6096% ( 6) 00:30:43.639 3708.742 - 3723.636: 99.6200% ( 6) 00:30:43.639 3723.636 - 3738.531: 99.6321% ( 7) 00:30:43.639 3738.531 - 3753.425: 99.6460% ( 8) 00:30:43.639 3753.425 - 3768.320: 99.6582% ( 7) 00:30:43.639 3768.320 - 3783.215: 99.6773% ( 11) 00:30:43.639 3783.215 - 3798.109: 99.6859% ( 5) 00:30:43.639 3798.109 - 3813.004: 99.6981% ( 7) 00:30:43.639 3813.004 - 3842.793: 99.7206% ( 13) 00:30:43.639 3842.793 - 3872.582: 99.7432% ( 13) 00:30:43.639 3872.582 - 3902.371: 99.7588% ( 9) 00:30:43.639 3902.371 - 3932.160: 99.7692% ( 6) 00:30:43.639 3932.160 - 3961.949: 99.7762% ( 4) 00:30:43.639 3961.949 - 3991.738: 99.7831% ( 4) 00:30:43.639 3991.738 - 4021.527: 99.7900% ( 4) 00:30:43.639 4021.527 - 4051.316: 99.7970% ( 4) 00:30:43.639 4051.316 - 4081.105: 99.8039% ( 4) 00:30:43.639 4081.105 - 4110.895: 99.8091% ( 3) 00:30:43.639 4110.895 - 4140.684: 99.8126% ( 2) 00:30:43.639 4140.684 - 4170.473: 99.8143% ( 1) 00:30:43.639 4170.473 - 4200.262: 99.8195% ( 3) 00:30:43.639 4200.262 - 4230.051: 99.8230% ( 2) 00:30:43.639 4230.051 - 4259.840: 99.8282% ( 3) 00:30:43.639 4259.840 - 4289.629: 99.8299% ( 1) 00:30:43.639 4289.629 - 4319.418: 99.8334% ( 2) 00:30:43.639 4319.418 - 4349.207: 99.8369% ( 2) 00:30:43.639 4349.207 - 4378.996: 99.8386% ( 1) 00:30:43.639 4378.996 - 4408.785: 99.8438% ( 3) 00:30:43.639 4408.785 - 4438.575: 99.8456% ( 1) 00:30:43.639 4438.575 - 4468.364: 99.8508% ( 3) 
00:30:43.639 4468.364 - 4498.153: 99.8577% ( 4) 00:30:43.639 4498.153 - 4527.942: 99.8629% ( 3) 00:30:43.639 4527.942 - 4557.731: 99.8681% ( 3) 00:30:43.639 4557.731 - 4587.520: 99.8733% ( 3) 00:30:43.639 4587.520 - 4617.309: 99.8803% ( 4) 00:30:43.639 4617.309 - 4647.098: 99.8820% ( 1) 00:30:43.639 4647.098 - 4676.887: 99.8872% ( 3) 00:30:43.639 4676.887 - 4706.676: 99.8907% ( 2) 00:30:43.639 4706.676 - 4736.465: 99.8924% ( 1) 00:30:43.639 4736.465 - 4766.255: 99.8942% ( 1) 00:30:43.639 4855.622 - 4885.411: 99.8959% ( 1) 00:30:43.639 4915.200 - 4944.989: 99.8976% ( 1) 00:30:43.639 4944.989 - 4974.778: 99.8994% ( 1) 00:30:43.639 5034.356 - 5064.145: 99.9011% ( 1) 00:30:43.639 5183.302 - 5213.091: 99.9028% ( 1) 00:30:43.639 5481.193 - 5510.982: 99.9046% ( 1) 00:30:43.639 5540.771 - 5570.560: 99.9063% ( 1) 00:30:43.639 5570.560 - 5600.349: 99.9098% ( 2) 00:30:43.639 5600.349 - 5630.138: 99.9115% ( 1) 00:30:43.639 5659.927 - 5689.716: 99.9132% ( 1) 00:30:43.639 5689.716 - 5719.505: 99.9150% ( 1) 00:30:43.639 5987.607 - 6017.396: 99.9167% ( 1) 00:30:43.639 6225.920 - 6255.709: 99.9184% ( 1) 00:30:43.639 6494.022 - 6523.811: 99.9202% ( 1) 00:30:43.639 6523.811 - 6553.600: 99.9219% ( 1) 00:30:43.639 6613.178 - 6642.967: 99.9237% ( 1) 00:30:43.639 6702.545 - 6732.335: 99.9254% ( 1) 00:30:43.639 7417.484 - 7447.273: 99.9271% ( 1) 00:30:43.639 9234.618 - 9294.196: 99.9289% ( 1) 00:30:43.639 9294.196 - 9353.775: 99.9323% ( 2) 00:30:43.639 9413.353 - 9472.931: 99.9358% ( 2) 00:30:43.639 9472.931 - 9532.509: 99.9375% ( 1) 00:30:43.639 9532.509 - 9592.087: 99.9410% ( 2) 00:30:43.639 9592.087 - 9651.665: 99.9427% ( 1) 00:30:43.639 9651.665 - 9711.244: 99.9462% ( 2) 00:30:43.639 9711.244 - 9770.822: 99.9531% ( 4) 00:30:43.639 9770.822 - 9830.400: 99.9566% ( 2) 00:30:43.639 9949.556 - 10009.135: 99.9584% ( 1) 00:30:43.639 10009.135 - 10068.713: 99.9601% ( 1) 00:30:43.639 10068.713 - 10128.291: 99.9618% ( 1) 00:30:43.639 10128.291 - 10187.869: 99.9636% ( 1) 00:30:43.639 10187.869 - 10247.447: 99.9653% ( 1) 00:30:43.639 10426.182 - 10485.760: 99.9705% ( 3) 00:30:43.639 10485.760 - 10545.338: 99.9740% ( 2) 00:30:43.639 10545.338 - 10604.916: 99.9757% ( 1) 00:30:43.640 11558.167 - 11617.745: 99.9774% ( 1) 00:30:43.640 11617.745 - 11677.324: 99.9809% ( 2) 00:30:43.640 11677.324 - 11736.902: 99.9861% ( 3) 00:30:43.640 11736.902 - 11796.480: 99.9879% ( 1) 00:30:43.640 12451.840 - 12511.418: 99.9896% ( 1) 00:30:43.640 12511.418 - 12570.996: 99.9948% ( 3) 00:30:43.640 12570.996 - 12630.575: 99.9983% ( 2) 00:30:43.640 12630.575 - 12690.153: 100.0000% ( 1) 00:30:43.640 00:30:43.640 05:49:47 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:43.640 00:30:43.640 real 0m2.714s 00:30:43.640 user 0m2.285s 00:30:43.640 sys 0m0.268s 00:30:43.640 05:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.640 05:49:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.640 ************************************ 00:30:43.640 END TEST nvme_perf 00:30:43.640 ************************************ 00:30:43.640 05:49:47 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:43.640 05:49:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:43.640 05:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:43.640 05:49:47 -- common/autotest_common.sh@10 -- # set +x 00:30:43.640 ************************************ 00:30:43.640 START TEST nvme_hello_world 00:30:43.640 ************************************ 00:30:43.640 05:49:47 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:44.206 Initializing NVMe Controllers 00:30:44.206 Attached to 0000:00:06.0 00:30:44.206 Namespace ID: 1 size: 5GB 00:30:44.206 Initialization complete. 00:30:44.206 INFO: using host memory buffer for IO 00:30:44.206 Hello world! 00:30:44.206 00:30:44.206 real 0m0.337s 00:30:44.206 user 0m0.137s 00:30:44.206 sys 0m0.130s 00:30:44.206 05:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.206 05:49:47 -- common/autotest_common.sh@10 -- # set +x 00:30:44.206 ************************************ 00:30:44.206 END TEST nvme_hello_world 00:30:44.206 ************************************ 00:30:44.206 05:49:47 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:44.206 05:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:44.206 05:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:44.206 05:49:47 -- common/autotest_common.sh@10 -- # set +x 00:30:44.206 ************************************ 00:30:44.206 START TEST nvme_sgl 00:30:44.206 ************************************ 00:30:44.206 05:49:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:44.464 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:30:44.464 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:30:44.464 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:30:44.464 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:30:44.464 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:30:44.464 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:30:44.464 NVMe Readv/Writev Request test 00:30:44.464 Attached to 0000:00:06.0 00:30:44.464 0000:00:06.0: build_io_request_2 test passed 00:30:44.464 0000:00:06.0: build_io_request_4 test passed 00:30:44.464 0000:00:06.0: build_io_request_5 test passed 00:30:44.464 0000:00:06.0: build_io_request_6 test passed 00:30:44.464 0000:00:06.0: build_io_request_7 test passed 00:30:44.464 0000:00:06.0: build_io_request_10 test passed 00:30:44.464 Cleaning up... 00:30:44.464 00:30:44.464 real 0m0.438s 00:30:44.464 user 0m0.253s 00:30:44.464 sys 0m0.122s 00:30:44.464 05:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.464 05:49:48 -- common/autotest_common.sh@10 -- # set +x 00:30:44.464 ************************************ 00:30:44.464 END TEST nvme_sgl 00:30:44.464 ************************************ 00:30:44.722 05:49:48 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:44.722 05:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:44.722 05:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:44.722 05:49:48 -- common/autotest_common.sh@10 -- # set +x 00:30:44.722 ************************************ 00:30:44.722 START TEST nvme_e2edp 00:30:44.722 ************************************ 00:30:44.722 05:49:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:44.981 NVMe Write/Read with End-to-End data protection test 00:30:44.981 Attached to 0000:00:06.0 00:30:44.981 Cleaning up... 
00:30:44.981 00:30:44.981 real 0m0.354s 00:30:44.981 user 0m0.117s 00:30:44.981 sys 0m0.126s 00:30:44.981 05:49:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.981 05:49:48 -- common/autotest_common.sh@10 -- # set +x 00:30:44.981 ************************************ 00:30:44.981 END TEST nvme_e2edp 00:30:44.981 ************************************ 00:30:44.981 05:49:48 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:44.981 05:49:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:44.981 05:49:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:44.981 05:49:48 -- common/autotest_common.sh@10 -- # set +x 00:30:44.981 ************************************ 00:30:44.981 START TEST nvme_reserve 00:30:44.981 ************************************ 00:30:44.981 05:49:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:45.549 ===================================================== 00:30:45.549 NVMe Controller at PCI bus 0, device 6, function 0 00:30:45.549 ===================================================== 00:30:45.549 Reservations: Not Supported 00:30:45.549 Reservation test passed 00:30:45.549 00:30:45.549 real 0m0.354s 00:30:45.549 user 0m0.124s 00:30:45.549 sys 0m0.127s 00:30:45.549 05:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:45.549 05:49:49 -- common/autotest_common.sh@10 -- # set +x 00:30:45.549 ************************************ 00:30:45.549 END TEST nvme_reserve 00:30:45.549 ************************************ 00:30:45.549 05:49:49 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:45.549 05:49:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:45.549 05:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:45.550 05:49:49 -- common/autotest_common.sh@10 -- # set +x 00:30:45.550 ************************************ 00:30:45.550 START TEST nvme_err_injection 00:30:45.550 ************************************ 00:30:45.550 05:49:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:45.808 NVMe Error Injection test 00:30:45.808 Attached to 0000:00:06.0 00:30:45.808 0000:00:06.0: get features failed as expected 00:30:45.808 0000:00:06.0: get features successfully as expected 00:30:45.808 0000:00:06.0: read failed as expected 00:30:45.808 0000:00:06.0: read successfully as expected 00:30:45.808 Cleaning up... 
00:30:45.808 00:30:45.808 real 0m0.303s 00:30:45.808 user 0m0.131s 00:30:45.808 sys 0m0.098s 00:30:45.808 05:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:45.808 05:49:49 -- common/autotest_common.sh@10 -- # set +x 00:30:45.808 ************************************ 00:30:45.808 END TEST nvme_err_injection 00:30:45.808 ************************************ 00:30:45.808 05:49:49 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:45.808 05:49:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:45.808 05:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:45.808 05:49:49 -- common/autotest_common.sh@10 -- # set +x 00:30:45.808 ************************************ 00:30:45.808 START TEST nvme_overhead 00:30:45.808 ************************************ 00:30:45.808 05:49:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:47.191 Initializing NVMe Controllers 00:30:47.191 Attached to 0000:00:06.0 00:30:47.191 Initialization complete. Launching workers. 00:30:47.191 submit (in ns) avg, min, max = 14777.6, 11045.5, 113570.0 00:30:47.191 complete (in ns) avg, min, max = 10362.9, 7359.1, 107515.5 00:30:47.191 00:30:47.191 Submit histogram 00:30:47.191 ================ 00:30:47.191 Range in us Cumulative Count 00:30:47.191 10.996 - 11.055: 0.0126% ( 1) 00:30:47.191 11.055 - 11.113: 0.0629% ( 4) 00:30:47.191 11.113 - 11.171: 0.3900% ( 26) 00:30:47.191 11.171 - 11.229: 1.6480% ( 100) 00:30:47.191 11.229 - 11.287: 3.8999% ( 179) 00:30:47.191 11.287 - 11.345: 8.1142% ( 335) 00:30:47.191 11.345 - 11.404: 13.9640% ( 465) 00:30:47.191 11.404 - 11.462: 19.2100% ( 417) 00:30:47.191 11.462 - 11.520: 23.7766% ( 363) 00:30:47.191 11.520 - 11.578: 26.8210% ( 242) 00:30:47.191 11.578 - 11.636: 29.2112% ( 190) 00:30:47.191 11.636 - 11.695: 31.6895% ( 197) 00:30:47.191 11.695 - 11.753: 35.1239% ( 273) 00:30:47.191 11.753 - 11.811: 39.7786% ( 370) 00:30:47.191 11.811 - 11.869: 44.9113% ( 408) 00:30:47.191 11.869 - 11.927: 49.7044% ( 381) 00:30:47.191 11.927 - 11.985: 53.3400% ( 289) 00:30:47.191 11.985 - 12.044: 56.1454% ( 223) 00:30:47.191 12.044 - 12.102: 58.4476% ( 183) 00:30:47.191 12.102 - 12.160: 60.3850% ( 154) 00:30:47.191 12.160 - 12.218: 62.0707% ( 134) 00:30:47.191 12.218 - 12.276: 63.0520% ( 78) 00:30:47.191 12.276 - 12.335: 64.0081% ( 76) 00:30:47.191 12.335 - 12.393: 64.8761% ( 69) 00:30:47.191 12.393 - 12.451: 65.5680% ( 55) 00:30:47.191 12.451 - 12.509: 65.9706% ( 32) 00:30:47.191 12.509 - 12.567: 66.4109% ( 35) 00:30:47.191 12.567 - 12.625: 66.9015% ( 39) 00:30:47.191 12.625 - 12.684: 67.4173% ( 41) 00:30:47.191 12.684 - 12.742: 67.7695% ( 28) 00:30:47.191 12.742 - 12.800: 68.1092% ( 27) 00:30:47.191 12.800 - 12.858: 68.2979% ( 15) 00:30:47.191 12.858 - 12.916: 68.6250% ( 26) 00:30:47.191 12.916 - 12.975: 68.8514% ( 18) 00:30:47.191 12.975 - 13.033: 69.1659% ( 25) 00:30:47.191 13.033 - 13.091: 69.4175% ( 20) 00:30:47.191 13.091 - 13.149: 69.6188% ( 16) 00:30:47.191 13.149 - 13.207: 69.7949% ( 14) 00:30:47.191 13.207 - 13.265: 69.9333% ( 11) 00:30:47.191 13.265 - 13.324: 69.9836% ( 4) 00:30:47.191 13.324 - 13.382: 70.0340% ( 4) 00:30:47.191 13.382 - 13.440: 70.0591% ( 2) 00:30:47.191 13.440 - 13.498: 70.1220% ( 5) 00:30:47.191 13.498 - 13.556: 70.1723% ( 4) 00:30:47.191 13.556 - 13.615: 70.2730% ( 8) 00:30:47.191 13.615 - 13.673: 70.3359% ( 5) 00:30:47.191 13.673 - 13.731: 70.4114% ( 6) 00:30:47.191 13.731 - 
13.789: 70.4869% ( 6) 00:30:47.191 13.789 - 13.847: 70.5749% ( 7) 00:30:47.191 13.847 - 13.905: 70.6378% ( 5) 00:30:47.191 13.905 - 13.964: 70.6630% ( 2) 00:30:47.191 13.964 - 14.022: 70.7133% ( 4) 00:30:47.191 14.022 - 14.080: 70.7510% ( 3) 00:30:47.191 14.080 - 14.138: 70.7636% ( 1) 00:30:47.191 14.138 - 14.196: 70.8014% ( 3) 00:30:47.191 14.196 - 14.255: 70.8391% ( 3) 00:30:47.191 14.255 - 14.313: 70.8894% ( 4) 00:30:47.191 14.313 - 14.371: 70.9146% ( 2) 00:30:47.191 14.371 - 14.429: 70.9523% ( 3) 00:30:47.191 14.429 - 14.487: 70.9901% ( 3) 00:30:47.191 14.487 - 14.545: 71.0026% ( 1) 00:30:47.191 14.545 - 14.604: 71.0404% ( 3) 00:30:47.191 14.604 - 14.662: 71.0530% ( 1) 00:30:47.191 14.662 - 14.720: 71.0655% ( 1) 00:30:47.191 14.720 - 14.778: 71.0781% ( 1) 00:30:47.191 14.778 - 14.836: 71.0907% ( 1) 00:30:47.191 14.895 - 15.011: 71.1159% ( 2) 00:30:47.191 15.011 - 15.127: 71.1284% ( 1) 00:30:47.191 15.127 - 15.244: 71.1410% ( 1) 00:30:47.191 15.244 - 15.360: 71.1536% ( 1) 00:30:47.191 15.360 - 15.476: 71.1662% ( 1) 00:30:47.191 15.593 - 15.709: 71.2039% ( 3) 00:30:47.191 15.825 - 15.942: 71.2165% ( 1) 00:30:47.191 15.942 - 16.058: 71.2417% ( 2) 00:30:47.191 16.058 - 16.175: 71.2542% ( 1) 00:30:47.191 16.175 - 16.291: 71.2668% ( 1) 00:30:47.191 16.291 - 16.407: 71.3046% ( 3) 00:30:47.191 16.407 - 16.524: 72.3487% ( 83) 00:30:47.191 16.524 - 16.640: 76.0473% ( 294) 00:30:47.191 16.640 - 16.756: 78.8024% ( 219) 00:30:47.191 16.756 - 16.873: 79.7710% ( 77) 00:30:47.191 16.873 - 16.989: 80.4504% ( 54) 00:30:47.191 16.989 - 17.105: 80.7397% ( 23) 00:30:47.191 17.105 - 17.222: 80.9787% ( 19) 00:30:47.191 17.222 - 17.338: 81.0668% ( 7) 00:30:47.191 17.338 - 17.455: 81.1045% ( 3) 00:30:47.191 17.455 - 17.571: 81.2052% ( 8) 00:30:47.191 17.571 - 17.687: 81.8342% ( 50) 00:30:47.191 17.687 - 17.804: 82.8909% ( 84) 00:30:47.191 17.804 - 17.920: 83.7338% ( 67) 00:30:47.191 17.920 - 18.036: 84.4886% ( 60) 00:30:47.191 18.036 - 18.153: 85.0421% ( 44) 00:30:47.191 18.153 - 18.269: 85.3566% ( 25) 00:30:47.191 18.269 - 18.385: 85.4825% ( 10) 00:30:47.191 18.385 - 18.502: 85.5831% ( 8) 00:30:47.191 18.502 - 18.618: 85.6208% ( 3) 00:30:47.191 18.618 - 18.735: 85.6712% ( 4) 00:30:47.191 18.735 - 18.851: 85.7089% ( 3) 00:30:47.191 18.851 - 18.967: 85.7341% ( 2) 00:30:47.191 18.967 - 19.084: 85.7844% ( 4) 00:30:47.191 19.084 - 19.200: 85.7970% ( 1) 00:30:47.191 19.200 - 19.316: 85.8473% ( 4) 00:30:47.191 19.316 - 19.433: 85.8724% ( 2) 00:30:47.191 19.433 - 19.549: 85.8976% ( 2) 00:30:47.191 19.549 - 19.665: 85.9228% ( 2) 00:30:47.191 19.782 - 19.898: 85.9605% ( 3) 00:30:47.192 19.898 - 20.015: 85.9731% ( 1) 00:30:47.192 20.131 - 20.247: 85.9982% ( 2) 00:30:47.192 20.247 - 20.364: 86.0108% ( 1) 00:30:47.192 20.480 - 20.596: 86.0234% ( 1) 00:30:47.192 20.596 - 20.713: 86.0360% ( 1) 00:30:47.192 20.713 - 20.829: 86.0737% ( 3) 00:30:47.192 20.829 - 20.945: 86.0989% ( 2) 00:30:47.192 20.945 - 21.062: 86.1240% ( 2) 00:30:47.192 21.062 - 21.178: 86.1618% ( 3) 00:30:47.192 21.178 - 21.295: 86.1744% ( 1) 00:30:47.192 21.411 - 21.527: 86.1869% ( 1) 00:30:47.192 21.876 - 21.993: 86.1995% ( 1) 00:30:47.192 22.109 - 22.225: 86.2121% ( 1) 00:30:47.192 22.225 - 22.342: 86.2498% ( 3) 00:30:47.192 22.458 - 22.575: 86.2750% ( 2) 00:30:47.192 22.575 - 22.691: 86.3002% ( 2) 00:30:47.192 22.691 - 22.807: 86.3253% ( 2) 00:30:47.192 22.924 - 23.040: 86.3505% ( 2) 00:30:47.192 23.156 - 23.273: 86.3631% ( 1) 00:30:47.192 23.855 - 23.971: 86.3756% ( 1) 00:30:47.192 23.971 - 24.087: 86.4260% ( 4) 00:30:47.192 24.320 - 24.436: 86.4385% ( 
1) 00:30:47.192 25.367 - 25.484: 86.4511% ( 1) 00:30:47.192 25.600 - 25.716: 86.4889% ( 3) 00:30:47.192 25.716 - 25.833: 86.6650% ( 14) 00:30:47.192 25.833 - 25.949: 87.1053% ( 35) 00:30:47.192 25.949 - 26.065: 87.8098% ( 56) 00:30:47.192 26.065 - 26.182: 88.9923% ( 94) 00:30:47.192 26.182 - 26.298: 90.7158% ( 137) 00:30:47.192 26.298 - 26.415: 92.0493% ( 106) 00:30:47.192 26.415 - 26.531: 93.1061% ( 84) 00:30:47.192 26.531 - 26.647: 93.7099% ( 48) 00:30:47.192 26.647 - 26.764: 93.9741% ( 21) 00:30:47.192 26.764 - 26.880: 94.2508% ( 22) 00:30:47.192 26.880 - 26.996: 94.4396% ( 15) 00:30:47.192 26.996 - 27.113: 94.6408% ( 16) 00:30:47.192 27.113 - 27.229: 94.8044% ( 13) 00:30:47.192 27.229 - 27.345: 94.9428% ( 11) 00:30:47.192 27.345 - 27.462: 95.2069% ( 21) 00:30:47.192 27.462 - 27.578: 95.6473% ( 35) 00:30:47.192 27.578 - 27.695: 96.3643% ( 57) 00:30:47.192 27.695 - 27.811: 97.2072% ( 67) 00:30:47.192 27.811 - 27.927: 98.1381% ( 74) 00:30:47.192 27.927 - 28.044: 98.6791% ( 43) 00:30:47.192 28.044 - 28.160: 98.9936% ( 25) 00:30:47.192 28.160 - 28.276: 99.0816% ( 7) 00:30:47.192 28.393 - 28.509: 99.1320% ( 4) 00:30:47.192 28.509 - 28.625: 99.1571% ( 2) 00:30:47.192 28.625 - 28.742: 99.1697% ( 1) 00:30:47.192 28.742 - 28.858: 99.1823% ( 1) 00:30:47.192 28.858 - 28.975: 99.1949% ( 1) 00:30:47.192 28.975 - 29.091: 99.2200% ( 2) 00:30:47.192 29.091 - 29.207: 99.2452% ( 2) 00:30:47.192 29.207 - 29.324: 99.2578% ( 1) 00:30:47.192 29.440 - 29.556: 99.2703% ( 1) 00:30:47.192 29.556 - 29.673: 99.2955% ( 2) 00:30:47.192 29.673 - 29.789: 99.3207% ( 2) 00:30:47.192 29.789 - 30.022: 99.3332% ( 1) 00:30:47.192 30.255 - 30.487: 99.3584% ( 2) 00:30:47.192 30.720 - 30.953: 99.3836% ( 2) 00:30:47.192 30.953 - 31.185: 99.3962% ( 1) 00:30:47.192 31.185 - 31.418: 99.4213% ( 2) 00:30:47.192 31.418 - 31.651: 99.4339% ( 1) 00:30:47.192 31.651 - 31.884: 99.4465% ( 1) 00:30:47.192 31.884 - 32.116: 99.4591% ( 1) 00:30:47.192 32.116 - 32.349: 99.4716% ( 1) 00:30:47.192 32.582 - 32.815: 99.4968% ( 2) 00:30:47.192 32.815 - 33.047: 99.5220% ( 2) 00:30:47.192 33.047 - 33.280: 99.5345% ( 1) 00:30:47.192 33.513 - 33.745: 99.5471% ( 1) 00:30:47.192 33.745 - 33.978: 99.5597% ( 1) 00:30:47.192 33.978 - 34.211: 99.5723% ( 1) 00:30:47.192 34.909 - 35.142: 99.5849% ( 1) 00:30:47.192 35.375 - 35.607: 99.5974% ( 1) 00:30:47.192 36.073 - 36.305: 99.6100% ( 1) 00:30:47.192 39.796 - 40.029: 99.6226% ( 1) 00:30:47.192 40.727 - 40.960: 99.6352% ( 1) 00:30:47.192 41.193 - 41.425: 99.6478% ( 1) 00:30:47.192 41.425 - 41.658: 99.6603% ( 1) 00:30:47.192 41.658 - 41.891: 99.6729% ( 1) 00:30:47.192 41.891 - 42.124: 99.6981% ( 2) 00:30:47.192 42.124 - 42.356: 99.7232% ( 2) 00:30:47.192 42.356 - 42.589: 99.7610% ( 3) 00:30:47.192 42.589 - 42.822: 99.7736% ( 1) 00:30:47.192 42.822 - 43.055: 99.7987% ( 2) 00:30:47.192 43.055 - 43.287: 99.8113% ( 1) 00:30:47.192 43.287 - 43.520: 99.8239% ( 1) 00:30:47.192 43.520 - 43.753: 99.8365% ( 1) 00:30:47.192 44.218 - 44.451: 99.8616% ( 2) 00:30:47.192 44.684 - 44.916: 99.8742% ( 1) 00:30:47.192 44.916 - 45.149: 99.8994% ( 2) 00:30:47.192 46.778 - 47.011: 99.9245% ( 2) 00:30:47.192 48.640 - 48.873: 99.9371% ( 1) 00:30:47.192 50.269 - 50.502: 99.9497% ( 1) 00:30:47.192 57.251 - 57.484: 99.9623% ( 1) 00:30:47.192 58.182 - 58.415: 99.9748% ( 1) 00:30:47.192 68.887 - 69.353: 99.9874% ( 1) 00:30:47.192 113.105 - 113.571: 100.0000% ( 1) 00:30:47.192 00:30:47.192 Complete histogram 00:30:47.192 ================== 00:30:47.192 Range in us Cumulative Count 00:30:47.192 7.331 - 7.360: 0.0126% ( 1) 00:30:47.192 7.360 - 
7.389: 0.0252% ( 1) 00:30:47.192 7.389 - 7.418: 0.1132% ( 7) 00:30:47.192 7.418 - 7.447: 0.3019% ( 15) 00:30:47.192 7.447 - 7.505: 2.7173% ( 192) 00:30:47.192 7.505 - 7.564: 8.2400% ( 439) 00:30:47.192 7.564 - 7.622: 14.0018% ( 458) 00:30:47.192 7.622 - 7.680: 16.6562% ( 211) 00:30:47.192 7.680 - 7.738: 18.4426% ( 142) 00:30:47.192 7.738 - 7.796: 21.1976% ( 219) 00:30:47.192 7.796 - 7.855: 24.3804% ( 253) 00:30:47.192 7.855 - 7.913: 27.6010% ( 256) 00:30:47.192 7.913 - 7.971: 31.1234% ( 280) 00:30:47.192 7.971 - 8.029: 32.7588% ( 130) 00:30:47.192 8.029 - 8.087: 34.0294% ( 101) 00:30:47.192 8.087 - 8.145: 39.1370% ( 406) 00:30:47.192 8.145 - 8.204: 48.4086% ( 737) 00:30:47.192 8.204 - 8.262: 54.4597% ( 481) 00:30:47.192 8.262 - 8.320: 57.2651% ( 223) 00:30:47.192 8.320 - 8.378: 60.9762% ( 295) 00:30:47.192 8.378 - 8.436: 63.9955% ( 240) 00:30:47.192 8.436 - 8.495: 65.6435% ( 131) 00:30:47.192 8.495 - 8.553: 66.6122% ( 77) 00:30:47.192 8.553 - 8.611: 67.4047% ( 63) 00:30:47.192 8.611 - 8.669: 68.2350% ( 66) 00:30:47.192 8.669 - 8.727: 68.9646% ( 58) 00:30:47.192 8.727 - 8.785: 69.3924% ( 34) 00:30:47.192 8.785 - 8.844: 69.8075% ( 33) 00:30:47.192 8.844 - 8.902: 70.2478% ( 35) 00:30:47.192 8.902 - 8.960: 70.6756% ( 34) 00:30:47.192 8.960 - 9.018: 70.9775% ( 24) 00:30:47.192 9.018 - 9.076: 71.2165% ( 19) 00:30:47.192 9.076 - 9.135: 71.4555% ( 19) 00:30:47.192 9.135 - 9.193: 71.6568% ( 16) 00:30:47.192 9.193 - 9.251: 71.8078% ( 12) 00:30:47.192 9.251 - 9.309: 71.9713% ( 13) 00:30:47.192 9.309 - 9.367: 72.1474% ( 14) 00:30:47.192 9.367 - 9.425: 72.3236% ( 14) 00:30:47.192 9.425 - 9.484: 72.4368% ( 9) 00:30:47.192 9.484 - 9.542: 72.5248% ( 7) 00:30:47.192 9.542 - 9.600: 72.6003% ( 6) 00:30:47.192 9.600 - 9.658: 72.6758% ( 6) 00:30:47.192 9.658 - 9.716: 72.7261% ( 4) 00:30:47.193 9.716 - 9.775: 72.7639% ( 3) 00:30:47.193 9.775 - 9.833: 72.8268% ( 5) 00:30:47.193 9.833 - 9.891: 72.8394% ( 1) 00:30:47.193 9.891 - 9.949: 72.8771% ( 3) 00:30:47.193 9.949 - 10.007: 72.9148% ( 3) 00:30:47.193 10.007 - 10.065: 72.9526% ( 3) 00:30:47.193 10.065 - 10.124: 72.9777% ( 2) 00:30:47.193 10.124 - 10.182: 73.0281% ( 4) 00:30:47.193 10.240 - 10.298: 73.0406% ( 1) 00:30:47.193 10.415 - 10.473: 73.0784% ( 3) 00:30:47.193 10.473 - 10.531: 73.0910% ( 1) 00:30:47.193 10.705 - 10.764: 73.1035% ( 1) 00:30:47.193 10.764 - 10.822: 73.1161% ( 1) 00:30:47.193 10.822 - 10.880: 73.1287% ( 1) 00:30:47.193 10.880 - 10.938: 73.1664% ( 3) 00:30:47.193 10.996 - 11.055: 73.1916% ( 2) 00:30:47.193 11.055 - 11.113: 73.2797% ( 7) 00:30:47.193 11.113 - 11.171: 73.6948% ( 33) 00:30:47.193 11.171 - 11.229: 74.9654% ( 101) 00:30:47.193 11.229 - 11.287: 76.1605% ( 95) 00:30:47.193 11.287 - 11.345: 76.9531% ( 63) 00:30:47.193 11.345 - 11.404: 77.3682% ( 33) 00:30:47.193 11.404 - 11.462: 78.3243% ( 76) 00:30:47.193 11.462 - 11.520: 80.3875% ( 164) 00:30:47.193 11.520 - 11.578: 83.7086% ( 264) 00:30:47.193 11.578 - 11.636: 85.7844% ( 165) 00:30:47.193 11.636 - 11.695: 86.6147% ( 66) 00:30:47.193 11.695 - 11.753: 86.7782% ( 13) 00:30:47.193 11.753 - 11.811: 86.8789% ( 8) 00:30:47.193 11.811 - 11.869: 86.9166% ( 3) 00:30:47.193 11.927 - 11.985: 86.9543% ( 3) 00:30:47.193 11.985 - 12.044: 87.0172% ( 5) 00:30:47.193 12.044 - 12.102: 87.1179% ( 8) 00:30:47.193 12.102 - 12.160: 87.1808% ( 5) 00:30:47.193 12.160 - 12.218: 87.2185% ( 3) 00:30:47.193 12.218 - 12.276: 87.3317% ( 9) 00:30:47.193 12.276 - 12.335: 87.4324% ( 8) 00:30:47.193 12.335 - 12.393: 87.4827% ( 4) 00:30:47.193 12.393 - 12.451: 87.5079% ( 2) 00:30:47.193 12.451 - 12.509: 87.5833% ( 
6) 00:30:47.193 12.509 - 12.567: 87.6085% ( 2) 00:30:47.193 12.567 - 12.625: 87.6211% ( 1) 00:30:47.193 12.625 - 12.684: 87.6588% ( 3) 00:30:47.193 12.684 - 12.742: 87.7091% ( 4) 00:30:47.193 12.742 - 12.800: 87.7217% ( 1) 00:30:47.193 12.800 - 12.858: 87.7595% ( 3) 00:30:47.193 12.858 - 12.916: 87.7720% ( 1) 00:30:47.193 12.975 - 13.033: 87.7846% ( 1) 00:30:47.193 13.033 - 13.091: 87.8224% ( 3) 00:30:47.193 13.091 - 13.149: 87.8475% ( 2) 00:30:47.193 13.149 - 13.207: 87.8853% ( 3) 00:30:47.193 13.207 - 13.265: 87.9104% ( 2) 00:30:47.193 13.265 - 13.324: 87.9482% ( 3) 00:30:47.193 13.324 - 13.382: 87.9607% ( 1) 00:30:47.193 13.382 - 13.440: 87.9733% ( 1) 00:30:47.193 13.440 - 13.498: 88.0111% ( 3) 00:30:47.193 13.498 - 13.556: 88.0362% ( 2) 00:30:47.193 13.556 - 13.615: 88.0991% ( 5) 00:30:47.193 13.615 - 13.673: 88.1117% ( 1) 00:30:47.193 13.673 - 13.731: 88.1495% ( 3) 00:30:47.193 13.731 - 13.789: 88.1872% ( 3) 00:30:47.193 13.905 - 13.964: 88.2124% ( 2) 00:30:47.193 13.964 - 14.022: 88.2375% ( 2) 00:30:47.193 14.022 - 14.080: 88.2501% ( 1) 00:30:47.193 14.080 - 14.138: 88.2753% ( 2) 00:30:47.193 14.138 - 14.196: 88.3004% ( 2) 00:30:47.193 14.255 - 14.313: 88.3130% ( 1) 00:30:47.193 14.313 - 14.371: 88.3256% ( 1) 00:30:47.193 14.487 - 14.545: 88.3382% ( 1) 00:30:47.193 14.545 - 14.604: 88.3633% ( 2) 00:30:47.193 14.720 - 14.778: 88.3759% ( 1) 00:30:47.193 14.778 - 14.836: 88.3885% ( 1) 00:30:47.193 14.836 - 14.895: 88.4011% ( 1) 00:30:47.193 14.895 - 15.011: 88.4136% ( 1) 00:30:47.193 15.011 - 15.127: 88.4388% ( 2) 00:30:47.193 15.127 - 15.244: 88.4765% ( 3) 00:30:47.193 15.360 - 15.476: 88.4891% ( 1) 00:30:47.193 15.476 - 15.593: 88.5017% ( 1) 00:30:47.193 15.593 - 15.709: 88.5269% ( 2) 00:30:47.193 15.709 - 15.825: 88.5394% ( 1) 00:30:47.193 15.825 - 15.942: 88.5520% ( 1) 00:30:47.193 15.942 - 16.058: 88.5772% ( 2) 00:30:47.193 16.175 - 16.291: 88.6023% ( 2) 00:30:47.193 16.291 - 16.407: 88.6149% ( 1) 00:30:47.193 16.640 - 16.756: 88.6401% ( 2) 00:30:47.193 16.873 - 16.989: 88.6527% ( 1) 00:30:47.193 17.105 - 17.222: 88.6652% ( 1) 00:30:47.193 17.222 - 17.338: 88.7030% ( 3) 00:30:47.193 17.338 - 17.455: 88.7281% ( 2) 00:30:47.193 17.687 - 17.804: 88.7407% ( 1) 00:30:47.193 17.804 - 17.920: 88.7533% ( 1) 00:30:47.193 17.920 - 18.036: 88.7785% ( 2) 00:30:47.193 18.036 - 18.153: 88.8036% ( 2) 00:30:47.193 18.153 - 18.269: 88.8288% ( 2) 00:30:47.193 18.269 - 18.385: 88.8414% ( 1) 00:30:47.193 18.385 - 18.502: 88.8539% ( 1) 00:30:47.193 18.502 - 18.618: 88.8791% ( 2) 00:30:47.193 18.618 - 18.735: 88.8917% ( 1) 00:30:47.193 18.735 - 18.851: 88.9294% ( 3) 00:30:47.193 18.851 - 18.967: 88.9672% ( 3) 00:30:47.193 18.967 - 19.084: 89.0049% ( 3) 00:30:47.193 19.084 - 19.200: 89.0175% ( 1) 00:30:47.193 19.200 - 19.316: 89.0678% ( 4) 00:30:47.193 19.316 - 19.433: 89.1055% ( 3) 00:30:47.193 19.782 - 19.898: 89.1181% ( 1) 00:30:47.193 19.898 - 20.015: 89.1307% ( 1) 00:30:47.193 20.015 - 20.131: 89.1810% ( 4) 00:30:47.193 20.247 - 20.364: 89.2188% ( 3) 00:30:47.193 20.364 - 20.480: 89.2313% ( 1) 00:30:47.193 20.713 - 20.829: 89.2439% ( 1) 00:30:47.193 20.829 - 20.945: 89.2565% ( 1) 00:30:47.193 20.945 - 21.062: 89.2817% ( 2) 00:30:47.193 21.178 - 21.295: 89.2943% ( 1) 00:30:47.193 21.411 - 21.527: 89.3194% ( 2) 00:30:47.193 21.527 - 21.644: 89.3320% ( 1) 00:30:47.193 21.760 - 21.876: 89.3446% ( 1) 00:30:47.193 21.876 - 21.993: 89.4830% ( 11) 00:30:47.193 21.993 - 22.109: 89.8855% ( 32) 00:30:47.193 22.109 - 22.225: 90.6152% ( 58) 00:30:47.193 22.225 - 22.342: 91.7348% ( 89) 00:30:47.193 22.342 - 
22.458: 92.5903% ( 68) 00:30:47.193 22.458 - 22.575: 93.4206% ( 66) 00:30:47.193 22.575 - 22.691: 94.0621% ( 51) 00:30:47.193 22.691 - 22.807: 94.5779% ( 41) 00:30:47.193 22.807 - 22.924: 94.9302% ( 28) 00:30:47.193 22.924 - 23.040: 95.1566% ( 18) 00:30:47.193 23.040 - 23.156: 95.3831% ( 18) 00:30:47.193 23.156 - 23.273: 95.5466% ( 13) 00:30:47.193 23.273 - 23.389: 95.6473% ( 8) 00:30:47.193 23.389 - 23.505: 95.8611% ( 17) 00:30:47.193 23.505 - 23.622: 96.3266% ( 37) 00:30:47.193 23.622 - 23.738: 96.8550% ( 42) 00:30:47.193 23.738 - 23.855: 97.4965% ( 51) 00:30:47.193 23.855 - 23.971: 98.1004% ( 48) 00:30:47.193 23.971 - 24.087: 98.5659% ( 37) 00:30:47.193 24.087 - 24.204: 98.8929% ( 26) 00:30:47.193 24.204 - 24.320: 99.1068% ( 17) 00:30:47.193 24.320 - 24.436: 99.2578% ( 12) 00:30:47.193 24.436 - 24.553: 99.3458% ( 7) 00:30:47.193 24.553 - 24.669: 99.3710% ( 2) 00:30:47.193 24.785 - 24.902: 99.3836% ( 1) 00:30:47.193 25.135 - 25.251: 99.4087% ( 2) 00:30:47.193 25.484 - 25.600: 99.4213% ( 1) 00:30:47.193 26.531 - 26.647: 99.4339% ( 1) 00:30:47.193 26.880 - 26.996: 99.4465% ( 1) 00:30:47.193 26.996 - 27.113: 99.4716% ( 2) 00:30:47.193 27.113 - 27.229: 99.5094% ( 3) 00:30:47.193 27.345 - 27.462: 99.5471% ( 3) 00:30:47.193 27.462 - 27.578: 99.5597% ( 1) 00:30:47.193 27.578 - 27.695: 99.5974% ( 3) 00:30:47.193 27.927 - 28.044: 99.6100% ( 1) 00:30:47.193 28.044 - 28.160: 99.6226% ( 1) 00:30:47.193 28.276 - 28.393: 99.6478% ( 2) 00:30:47.193 28.625 - 28.742: 99.6603% ( 1) 00:30:47.193 28.858 - 28.975: 99.7107% ( 4) 00:30:47.193 28.975 - 29.091: 99.7232% ( 1) 00:30:47.193 29.207 - 29.324: 99.7484% ( 2) 00:30:47.193 29.556 - 29.673: 99.7610% ( 1) 00:30:47.194 29.789 - 30.022: 99.7736% ( 1) 00:30:47.194 30.022 - 30.255: 99.7861% ( 1) 00:30:47.194 30.255 - 30.487: 99.7987% ( 1) 00:30:47.194 30.487 - 30.720: 99.8239% ( 2) 00:30:47.194 30.720 - 30.953: 99.8365% ( 1) 00:30:47.194 32.582 - 32.815: 99.8490% ( 1) 00:30:47.194 32.815 - 33.047: 99.8616% ( 1) 00:30:47.194 33.047 - 33.280: 99.8742% ( 1) 00:30:47.194 33.513 - 33.745: 99.8868% ( 1) 00:30:47.194 35.607 - 35.840: 99.8994% ( 1) 00:30:47.194 37.935 - 38.167: 99.9119% ( 1) 00:30:47.194 39.564 - 39.796: 99.9245% ( 1) 00:30:47.194 41.658 - 41.891: 99.9371% ( 1) 00:30:47.194 43.055 - 43.287: 99.9497% ( 1) 00:30:47.194 43.520 - 43.753: 99.9623% ( 1) 00:30:47.194 50.735 - 50.967: 99.9748% ( 1) 00:30:47.194 102.400 - 102.865: 99.9874% ( 1) 00:30:47.194 107.055 - 107.520: 100.0000% ( 1) 00:30:47.194 00:30:47.194 00:30:47.194 real 0m1.315s 00:30:47.194 user 0m1.128s 00:30:47.194 sys 0m0.089s 00:30:47.194 05:49:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.194 ************************************ 00:30:47.194 END TEST nvme_overhead 00:30:47.194 ************************************ 00:30:47.194 05:49:50 -- common/autotest_common.sh@10 -- # set +x 00:30:47.194 05:49:50 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:47.194 05:49:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:47.194 05:49:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.194 05:49:50 -- common/autotest_common.sh@10 -- # set +x 00:30:47.194 ************************************ 00:30:47.194 START TEST nvme_arbitration 00:30:47.194 ************************************ 00:30:47.194 05:49:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:51.422 Initializing NVMe Controllers 00:30:51.422 Attached to 0000:00:06.0 
00:30:51.422 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:51.422 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:51.422 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:51.422 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:51.422 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:51.422 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:51.422 Initialization complete. Launching workers. 00:30:51.422 Starting thread on core 1 with urgent priority queue 00:30:51.422 Starting thread on core 2 with urgent priority queue 00:30:51.422 Starting thread on core 3 with urgent priority queue 00:30:51.422 Starting thread on core 0 with urgent priority queue 00:30:51.422 QEMU NVMe Ctrl (12340 ) core 0: 2026.67 IO/s 49.34 secs/100000 ios 00:30:51.422 QEMU NVMe Ctrl (12340 ) core 1: 1301.33 IO/s 76.84 secs/100000 ios 00:30:51.422 QEMU NVMe Ctrl (12340 ) core 2: 405.33 IO/s 246.71 secs/100000 ios 00:30:51.422 QEMU NVMe Ctrl (12340 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:30:51.422 ======================================================== 00:30:51.422 00:30:51.422 00:30:51.422 real 0m3.516s 00:30:51.422 user 0m9.520s 00:30:51.422 sys 0m0.149s 00:30:51.422 05:49:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.422 05:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 ************************************ 00:30:51.422 END TEST nvme_arbitration 00:30:51.422 ************************************ 00:30:51.422 05:49:54 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:51.422 05:49:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:51.422 05:49:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.422 05:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 ************************************ 00:30:51.422 START TEST nvme_single_aen 00:30:51.422 ************************************ 00:30:51.422 05:49:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:51.422 [2024-10-07 05:49:54.625460] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:30:51.422 [2024-10-07 05:49:54.625571] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.422 [2024-10-07 05:49:54.824920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:51.422 Asynchronous Event Request test 00:30:51.422 Attached to 0000:00:06.0 00:30:51.422 Reset controller to setup AER completions for this process 00:30:51.422 Registering asynchronous event callbacks... 
00:30:51.422 Getting orig temperature thresholds of all controllers 00:30:51.422 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:51.422 Setting all controllers temperature threshold low to trigger AER 00:30:51.422 Waiting for all controllers temperature threshold to be set lower 00:30:51.422 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:51.422 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:51.422 Waiting for all controllers to trigger AER and reset threshold 00:30:51.422 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:51.422 Cleaning up... 00:30:51.422 00:30:51.422 real 0m0.297s 00:30:51.422 user 0m0.133s 00:30:51.422 sys 0m0.096s 00:30:51.422 05:49:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.422 05:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 ************************************ 00:30:51.422 END TEST nvme_single_aen 00:30:51.422 ************************************ 00:30:51.423 05:49:54 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:51.423 05:49:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:51.423 05:49:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:51.423 05:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:51.423 ************************************ 00:30:51.423 START TEST nvme_doorbell_aers 00:30:51.423 ************************************ 00:30:51.423 05:49:54 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:30:51.423 05:49:54 -- nvme/nvme.sh@70 -- # bdfs=() 00:30:51.423 05:49:54 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:51.423 05:49:54 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:51.423 05:49:54 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:51.423 05:49:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:51.423 05:49:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:51.423 05:49:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:51.423 05:49:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:51.423 05:49:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:51.423 05:49:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:51.423 05:49:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:51.423 05:49:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:51.423 05:49:54 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:30:51.423 [2024-10-07 05:49:55.275835] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183098) is not found. Dropping the request. 00:31:01.388 Executing: test_write_invalid_db 00:31:01.388 Waiting for AER completion... 00:31:01.388 Failure: test_write_invalid_db 00:31:01.388 00:31:01.388 Executing: test_invalid_db_write_overflow_sq 00:31:01.388 Waiting for AER completion... 00:31:01.388 Failure: test_invalid_db_write_overflow_sq 00:31:01.388 00:31:01.388 Executing: test_invalid_db_write_overflow_cq 00:31:01.388 Waiting for AER completion... 
00:31:01.388 Failure: test_invalid_db_write_overflow_cq 00:31:01.388 00:31:01.388 00:31:01.388 real 0m10.108s 00:31:01.388 user 0m8.579s 00:31:01.388 sys 0m1.475s 00:31:01.388 05:50:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.388 05:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:01.388 ************************************ 00:31:01.388 END TEST nvme_doorbell_aers 00:31:01.388 ************************************ 00:31:01.388 05:50:05 -- nvme/nvme.sh@97 -- # uname 00:31:01.388 05:50:05 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:01.388 05:50:05 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:01.388 05:50:05 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:31:01.388 05:50:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:01.388 05:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:01.388 ************************************ 00:31:01.388 START TEST nvme_multi_aen 00:31:01.388 ************************************ 00:31:01.388 05:50:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:31:01.388 [2024-10-07 05:50:05.128521] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:01.388 [2024-10-07 05:50:05.128619] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.388 [2024-10-07 05:50:05.289449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:01.388 [2024-10-07 05:50:05.289495] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183098) is not found. Dropping the request. 00:31:01.388 [2024-10-07 05:50:05.289578] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183098) is not found. Dropping the request. 00:31:01.388 [2024-10-07 05:50:05.289604] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183098) is not found. Dropping the request. 00:31:01.388 [2024-10-07 05:50:05.295737] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:01.388 [2024-10-07 05:50:05.295956] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.388 Child process pid: 183291 00:31:01.646 [Child] Asynchronous Event Request test 00:31:01.646 [Child] Attached to 0000:00:06.0 00:31:01.646 [Child] Registering asynchronous event callbacks... 00:31:01.646 [Child] Getting orig temperature thresholds of all controllers 00:31:01.647 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:01.647 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:01.647 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:01.647 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:01.647 [Child] Cleaning up... 00:31:01.905 Asynchronous Event Request test 00:31:01.905 Attached to 0000:00:06.0 00:31:01.905 Reset controller to setup AER completions for this process 00:31:01.905 Registering asynchronous event callbacks... 
00:31:01.905 Getting orig temperature thresholds of all controllers 00:31:01.905 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:01.905 Setting all controllers temperature threshold low to trigger AER 00:31:01.905 Waiting for all controllers temperature threshold to be set lower 00:31:01.905 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:01.905 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:31:01.905 Waiting for all controllers to trigger AER and reset threshold 00:31:01.905 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:01.905 Cleaning up... 00:31:01.905 00:31:01.905 real 0m0.576s 00:31:01.905 user 0m0.176s 00:31:01.905 sys 0m0.246s 00:31:01.905 05:50:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.905 05:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:01.905 ************************************ 00:31:01.905 END TEST nvme_multi_aen 00:31:01.905 ************************************ 00:31:01.905 05:50:05 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:01.905 05:50:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:01.905 05:50:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:01.905 05:50:05 -- common/autotest_common.sh@10 -- # set +x 00:31:01.905 ************************************ 00:31:01.905 START TEST nvme_startup 00:31:01.905 ************************************ 00:31:01.905 05:50:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:02.163 Initializing NVMe Controllers 00:31:02.163 Attached to 0000:00:06.0 00:31:02.163 Initialization complete. 00:31:02.163 Time used:205324.500 (us). 00:31:02.163 00:31:02.163 real 0m0.297s 00:31:02.163 user 0m0.104s 00:31:02.163 sys 0m0.126s 00:31:02.163 05:50:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:02.163 05:50:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.163 ************************************ 00:31:02.163 END TEST nvme_startup 00:31:02.163 ************************************ 00:31:02.163 05:50:06 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:02.163 05:50:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:02.163 05:50:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:02.163 05:50:06 -- common/autotest_common.sh@10 -- # set +x 00:31:02.163 ************************************ 00:31:02.163 START TEST nvme_multi_secondary 00:31:02.163 ************************************ 00:31:02.163 05:50:06 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:31:02.163 05:50:06 -- nvme/nvme.sh@52 -- # pid0=183350 00:31:02.163 05:50:06 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:02.163 05:50:06 -- nvme/nvme.sh@54 -- # pid1=183351 00:31:02.163 05:50:06 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:02.163 05:50:06 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:06.347 Initializing NVMe Controllers 00:31:06.347 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:06.348 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:06.348 Initialization complete. Launching workers. 
00:31:06.348 ======================================================== 00:31:06.348 Latency(us) 00:31:06.348 Device Information : IOPS MiB/s Average min max 00:31:06.348 PCIE (0000:00:06.0) NSID 1 from core 1: 34827.33 136.04 459.06 104.91 1346.62 00:31:06.348 ======================================================== 00:31:06.348 Total : 34827.33 136.04 459.06 104.91 1346.62 00:31:06.348 00:31:06.348 Initializing NVMe Controllers 00:31:06.348 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:06.348 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:06.348 Initialization complete. Launching workers. 00:31:06.348 ======================================================== 00:31:06.348 Latency(us) 00:31:06.348 Device Information : IOPS MiB/s Average min max 00:31:06.348 PCIE (0000:00:06.0) NSID 1 from core 2: 15187.23 59.33 1052.34 136.70 16996.35 00:31:06.348 ======================================================== 00:31:06.348 Total : 15187.23 59.33 1052.34 136.70 16996.35 00:31:06.348 00:31:06.348 05:50:09 -- nvme/nvme.sh@56 -- # wait 183350 00:31:07.722 Initializing NVMe Controllers 00:31:07.722 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:07.722 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:07.722 Initialization complete. Launching workers. 00:31:07.722 ======================================================== 00:31:07.722 Latency(us) 00:31:07.722 Device Information : IOPS MiB/s Average min max 00:31:07.722 PCIE (0000:00:06.0) NSID 1 from core 0: 40476.19 158.11 394.96 126.52 7562.99 00:31:07.722 ======================================================== 00:31:07.722 Total : 40476.19 158.11 394.96 126.52 7562.99 00:31:07.722 00:31:07.722 05:50:11 -- nvme/nvme.sh@57 -- # wait 183351 00:31:07.722 05:50:11 -- nvme/nvme.sh@61 -- # pid0=183437 00:31:07.722 05:50:11 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:07.722 05:50:11 -- nvme/nvme.sh@63 -- # pid1=183438 00:31:07.722 05:50:11 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:07.722 05:50:11 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:11.006 Initializing NVMe Controllers 00:31:11.006 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:11.006 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:11.006 Initialization complete. Launching workers. 00:31:11.006 ======================================================== 00:31:11.006 Latency(us) 00:31:11.006 Device Information : IOPS MiB/s Average min max 00:31:11.006 PCIE (0000:00:06.0) NSID 1 from core 1: 34540.46 134.92 462.89 105.07 1265.18 00:31:11.006 ======================================================== 00:31:11.006 Total : 34540.46 134.92 462.89 105.07 1265.18 00:31:11.006 00:31:11.264 Initializing NVMe Controllers 00:31:11.264 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:11.264 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:11.264 Initialization complete. Launching workers. 
00:31:11.264 ======================================================== 00:31:11.264 Latency(us) 00:31:11.264 Device Information : IOPS MiB/s Average min max 00:31:11.264 PCIE (0000:00:06.0) NSID 1 from core 0: 34827.00 136.04 459.08 107.38 1606.60 00:31:11.264 ======================================================== 00:31:11.264 Total : 34827.00 136.04 459.08 107.38 1606.60 00:31:11.264 00:31:13.794 Initializing NVMe Controllers 00:31:13.794 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:13.794 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:13.794 Initialization complete. Launching workers. 00:31:13.794 ======================================================== 00:31:13.794 Latency(us) 00:31:13.794 Device Information : IOPS MiB/s Average min max 00:31:13.794 PCIE (0000:00:06.0) NSID 1 from core 2: 18119.60 70.78 882.21 125.00 20455.19 00:31:13.794 ======================================================== 00:31:13.794 Total : 18119.60 70.78 882.21 125.00 20455.19 00:31:13.794 00:31:13.794 05:50:17 -- nvme/nvme.sh@65 -- # wait 183437 00:31:13.794 05:50:17 -- nvme/nvme.sh@66 -- # wait 183438 00:31:13.794 00:31:13.794 real 0m11.237s 00:31:13.794 user 0m18.722s 00:31:13.794 sys 0m0.878s 00:31:13.794 05:50:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.794 05:50:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.794 ************************************ 00:31:13.794 END TEST nvme_multi_secondary 00:31:13.794 ************************************ 00:31:13.794 05:50:17 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:13.794 05:50:17 -- nvme/nvme.sh@102 -- # kill_stub 00:31:13.794 05:50:17 -- common/autotest_common.sh@1065 -- # [[ -e /proc/182651 ]] 00:31:13.794 05:50:17 -- common/autotest_common.sh@1066 -- # kill 182651 00:31:13.794 05:50:17 -- common/autotest_common.sh@1067 -- # wait 182651 00:31:14.052 [2024-10-07 05:50:17.971066] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183290) is not found. Dropping the request. 00:31:14.052 [2024-10-07 05:50:17.971191] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183290) is not found. Dropping the request. 00:31:14.311 [2024-10-07 05:50:17.971250] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183290) is not found. Dropping the request. 00:31:14.311 [2024-10-07 05:50:17.971301] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 183290) is not found. Dropping the request. 00:31:14.573 05:50:18 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:31:14.573 05:50:18 -- common/autotest_common.sh@1073 -- # echo 2 00:31:14.573 05:50:18 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:14.573 05:50:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:14.573 05:50:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:14.573 05:50:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.573 ************************************ 00:31:14.573 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:14.573 ************************************ 00:31:14.573 05:50:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:14.573 * Looking for test storage... 
00:31:14.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:14.573 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:14.573 05:50:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:14.573 05:50:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:14.573 05:50:18 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:14.834 05:50:18 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:14.834 05:50:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:14.834 05:50:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:14.834 05:50:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:14.834 05:50:18 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:14.834 05:50:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:14.834 05:50:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:14.834 05:50:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:14.834 05:50:18 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=183599 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 183599 00:31:14.834 05:50:18 -- common/autotest_common.sh@819 -- # '[' -z 183599 ']' 00:31:14.834 05:50:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.834 05:50:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:14.834 05:50:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:14.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.834 05:50:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.834 05:50:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:14.834 05:50:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.834 [2024-10-07 05:50:18.702415] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:14.834 [2024-10-07 05:50:18.703536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183599 ] 00:31:15.093 [2024-10-07 05:50:18.919762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.352 [2024-10-07 05:50:19.174235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:15.352 [2024-10-07 05:50:19.174654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.352 [2024-10-07 05:50:19.174784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.352 [2024-10-07 05:50:19.176272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.352 [2024-10-07 05:50:19.176237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.726 05:50:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:16.726 05:50:20 -- common/autotest_common.sh@852 -- # return 0 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:16.726 05:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.726 05:50:20 -- common/autotest_common.sh@10 -- # set +x 00:31:16.726 nvme0n1 00:31:16.726 05:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8ph55.txt 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:16.726 05:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.726 05:50:20 -- common/autotest_common.sh@10 -- # set +x 00:31:16.726 true 00:31:16.726 05:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728280220 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=183627 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:16.726 05:50:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:18.675 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:18.675 05:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.675 05:50:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.675 [2024-10-07 05:50:22.463008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:18.675 [2024-10-07 05:50:22.463839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:18.676 [2024-10-07 05:50:22.464046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:18.676 [2024-10-07 05:50:22.464182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.676 [2024-10-07 05:50:22.466092] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:18.676 05:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.676 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 183627 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 183627 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 183627 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:18.676 05:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.676 05:50:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.676 05:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8ph55.txt 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8ph55.txt 00:31:18.676 05:50:22 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 183599 00:31:18.676 05:50:22 -- common/autotest_common.sh@926 -- # '[' -z 183599 ']' 00:31:18.676 05:50:22 -- common/autotest_common.sh@930 -- # kill -0 183599 00:31:18.676 05:50:22 -- common/autotest_common.sh@931 -- # uname 00:31:18.676 
05:50:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:18.676 05:50:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 183599 00:31:18.676 05:50:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:18.676 05:50:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:18.676 05:50:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 183599' 00:31:18.676 killing process with pid 183599 00:31:18.676 05:50:22 -- common/autotest_common.sh@945 -- # kill 183599 00:31:18.676 05:50:22 -- common/autotest_common.sh@950 -- # wait 183599 00:31:21.211 05:50:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:21.211 05:50:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:21.211 00:31:21.211 real 0m6.107s 00:31:21.211 user 0m21.366s 00:31:21.211 sys 0m0.797s 00:31:21.211 05:50:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.211 05:50:24 -- common/autotest_common.sh@10 -- # set +x 00:31:21.211 ************************************ 00:31:21.211 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:21.211 ************************************ 00:31:21.211 05:50:24 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:21.211 05:50:24 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:21.211 05:50:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:21.211 05:50:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:21.211 05:50:24 -- common/autotest_common.sh@10 -- # set +x 00:31:21.211 ************************************ 00:31:21.211 START TEST nvme_fio 00:31:21.211 ************************************ 00:31:21.211 05:50:24 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:31:21.211 05:50:24 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:21.211 05:50:24 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:21.211 05:50:24 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:21.211 05:50:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:21.211 05:50:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:21.211 05:50:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:21.211 05:50:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:21.211 05:50:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:21.211 05:50:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:21.211 05:50:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:21.211 05:50:24 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:31:21.211 05:50:24 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:21.211 05:50:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:21.211 05:50:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:21.211 05:50:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:21.211 05:50:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:21.211 05:50:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:21.211 05:50:25 -- nvme/nvme.sh@41 -- # bs=4096 00:31:21.211 05:50:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.211 
05:50:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.211 05:50:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:21.211 05:50:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.211 05:50:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:21.211 05:50:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:21.211 05:50:25 -- common/autotest_common.sh@1320 -- # shift 00:31:21.211 05:50:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:21.211 05:50:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.211 05:50:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:21.211 05:50:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:21.211 05:50:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:21.211 05:50:25 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:21.211 05:50:25 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:21.211 05:50:25 -- common/autotest_common.sh@1326 -- # break 00:31:21.211 05:50:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:21.211 05:50:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:21.470 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:21.470 fio-3.35 00:31:21.470 Starting 1 thread 00:31:24.757 00:31:24.757 test: (groupid=0, jobs=1): err= 0: pid=183772: Mon Oct 7 05:50:28 2024 00:31:24.757 read: IOPS=14.0k, BW=54.9MiB/s (57.5MB/s)(110MiB/2001msec) 00:31:24.757 slat (usec): min=3, max=112, avg= 6.69, stdev= 3.86 00:31:24.757 clat (usec): min=232, max=11566, avg=4528.89, stdev=390.00 00:31:24.757 lat (usec): min=238, max=11679, avg=4535.58, stdev=390.44 00:31:24.757 clat percentiles (usec): 00:31:24.757 | 1.00th=[ 3949], 5.00th=[ 4146], 10.00th=[ 4228], 20.00th=[ 4359], 00:31:24.757 | 30.00th=[ 4424], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:24.757 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4948], 00:31:24.757 | 99.00th=[ 5669], 99.50th=[ 6783], 99.90th=[ 8717], 99.95th=[ 9765], 00:31:24.757 | 99.99th=[11207] 00:31:24.757 bw ( KiB/s): min=54744, max=56848, per=99.37%, avg=55826.67, stdev=1053.34, samples=3 00:31:24.757 iops : min=13686, max=14212, avg=13956.67, stdev=263.34, samples=3 00:31:24.757 write: IOPS=14.1k, BW=54.9MiB/s (57.6MB/s)(110MiB/2001msec); 0 zone resets 00:31:24.757 slat (nsec): min=3745, max=49834, avg=6938.22, stdev=3843.53 00:31:24.757 clat (usec): min=350, max=11180, avg=4545.87, stdev=409.68 00:31:24.757 lat (usec): min=356, max=11199, avg=4552.80, stdev=410.08 00:31:24.757 clat percentiles (usec): 00:31:24.757 | 1.00th=[ 3982], 5.00th=[ 4146], 10.00th=[ 4228], 20.00th=[ 4359], 00:31:24.757 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:31:24.757 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4948], 00:31:24.757 | 99.00th=[ 5800], 99.50th=[ 6980], 99.90th=[ 8848], 99.95th=[ 9765], 
00:31:24.757 | 99.99th=[11076] 00:31:24.757 bw ( KiB/s): min=55080, max=56608, per=99.36%, avg=55856.00, stdev=764.28, samples=3 00:31:24.757 iops : min=13770, max=14152, avg=13964.00, stdev=191.07, samples=3 00:31:24.757 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:31:24.757 lat (msec) : 2=0.04%, 4=1.45%, 10=98.42%, 20=0.05% 00:31:24.757 cpu : usr=99.90%, sys=0.05%, ctx=4, majf=0, minf=36 00:31:24.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:24.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.757 issued rwts: total=28104,28121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.757 00:31:24.757 Run status group 0 (all jobs): 00:31:24.757 READ: bw=54.9MiB/s (57.5MB/s), 54.9MiB/s-54.9MiB/s (57.5MB/s-57.5MB/s), io=110MiB (115MB), run=2001-2001msec 00:31:24.757 WRITE: bw=54.9MiB/s (57.6MB/s), 54.9MiB/s-54.9MiB/s (57.6MB/s-57.6MB/s), io=110MiB (115MB), run=2001-2001msec 00:31:24.757 ----------------------------------------------------- 00:31:24.757 Suppressions used: 00:31:24.757 count bytes template 00:31:24.757 1 32 /usr/src/fio/parse.c 00:31:24.757 ----------------------------------------------------- 00:31:24.757 00:31:24.757 05:50:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:24.757 05:50:28 -- nvme/nvme.sh@46 -- # true 00:31:24.757 00:31:24.757 real 0m4.015s 00:31:24.757 user 0m3.265s 00:31:24.757 sys 0m0.432s 00:31:24.757 05:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.757 ************************************ 00:31:24.757 END TEST nvme_fio 00:31:24.757 05:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.757 ************************************ 00:31:24.757 00:31:24.757 real 0m49.044s 00:31:24.757 user 2m8.266s 00:31:24.757 sys 0m9.217s 00:31:24.757 05:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.757 ************************************ 00:31:24.757 END TEST nvme 00:31:24.757 05:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.757 ************************************ 00:31:24.757 05:50:28 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:24.757 05:50:28 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:24.757 05:50:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:24.757 05:50:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:24.757 05:50:28 -- common/autotest_common.sh@10 -- # set +x 00:31:25.016 ************************************ 00:31:25.016 START TEST nvme_scc 00:31:25.016 ************************************ 00:31:25.016 05:50:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:25.016 * Looking for test storage... 
00:31:25.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:25.016 05:50:28 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:25.016 05:50:28 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:25.016 05:50:28 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:25.016 05:50:28 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:25.016 05:50:28 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:25.016 05:50:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.016 05:50:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.016 05:50:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.016 05:50:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.016 05:50:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.016 05:50:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.016 05:50:28 -- paths/export.sh@5 -- # export PATH 00:31:25.016 05:50:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:25.016 05:50:28 -- nvme/functions.sh@10 -- # ctrls=() 00:31:25.016 05:50:28 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:25.016 05:50:28 -- nvme/functions.sh@11 -- # nvmes=() 00:31:25.016 05:50:28 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:25.016 05:50:28 -- nvme/functions.sh@12 -- # bdfs=() 00:31:25.016 05:50:28 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:25.016 05:50:28 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:25.016 05:50:28 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:25.016 05:50:28 -- nvme/functions.sh@14 -- # nvme_name= 00:31:25.016 05:50:28 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:25.016 05:50:28 -- nvme/nvme_scc.sh@12 -- # uname 00:31:25.016 05:50:28 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:25.016 05:50:28 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
00:31:25.016 05:50:28 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:25.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:25.275 Waiting for block devices as requested 00:31:25.536 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:25.536 05:50:29 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:25.536 05:50:29 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:25.536 05:50:29 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:25.536 05:50:29 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:25.536 05:50:29 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:25.536 05:50:29 -- scripts/common.sh@15 -- # local i 00:31:25.536 05:50:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:25.536 05:50:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:25.536 05:50:29 -- scripts/common.sh@24 -- # return 0 00:31:25.536 05:50:29 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:25.536 05:50:29 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:25.536 05:50:29 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@18 -- # shift 00:31:25.536 05:50:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 
00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.536 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.536 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:25.536 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:25.537 05:50:29 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.537 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.537 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.537 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- 
# read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:25.538 
05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:25.538 
05:50:29 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.538 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.538 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:25.538 05:50:29 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 
05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:25.539 05:50:29 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:25.539 05:50:29 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:25.539 05:50:29 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:25.539 05:50:29 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@18 -- # shift 00:31:25.539 05:50:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 
00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.539 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:25.539 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.539 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 
05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:25.540 05:50:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # IFS=: 00:31:25.540 05:50:29 -- nvme/functions.sh@21 -- # read -r reg val 00:31:25.540 05:50:29 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:25.540 05:50:29 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:25.540 05:50:29 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:25.540 05:50:29 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:25.540 05:50:29 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:25.540 05:50:29 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:25.540 05:50:29 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:25.541 05:50:29 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:25.541 05:50:29 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:25.541 05:50:29 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:25.541 05:50:29 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:25.541 05:50:29 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:25.541 05:50:29 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:25.541 05:50:29 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:25.541 05:50:29 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:25.541 05:50:29 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:25.541 05:50:29 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:25.541 05:50:29 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:25.541 05:50:29 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:25.541 05:50:29 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:25.541 05:50:29 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:25.541 05:50:29 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:25.541 05:50:29 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:25.541 05:50:29 -- nvme/functions.sh@197 -- # echo nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:31:25.541 05:50:29 -- nvme/functions.sh@206 -- # echo nvme0 00:31:25.541 05:50:29 -- nvme/functions.sh@207 -- # return 0 00:31:25.541 05:50:29 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:25.541 05:50:29 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:25.541 05:50:29 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:26.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:26.109 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:27.045 05:50:31 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:27.045 05:50:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:27.045 05:50:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.045 05:50:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.304 ************************************ 00:31:27.304 START TEST nvme_simple_copy 00:31:27.304 ************************************ 00:31:27.304 05:50:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:27.563 Initializing NVMe Controllers 00:31:27.563 Attaching to 0000:00:06.0 00:31:27.563 Controller supports SCC. Attached to 0000:00:06.0 00:31:27.563 Namespace ID: 1 size: 5GB 00:31:27.563 Initialization complete. 00:31:27.563 00:31:27.563 Controller QEMU NVMe Ctrl (12340 ) 00:31:27.563 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:27.563 Namespace Block Size:4096 00:31:27.563 Writing LBAs 0 to 63 with Random Data 00:31:27.563 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:27.563 LBAs matching Written Data: 64 00:31:27.563 00:31:27.563 real 0m0.323s 00:31:27.563 user 0m0.113s 00:31:27.563 sys 0m0.111s 00:31:27.563 05:50:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.563 05:50:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.563 ************************************ 00:31:27.563 END TEST nvme_simple_copy 00:31:27.563 ************************************ 00:31:27.563 00:31:27.563 real 0m2.651s 00:31:27.563 user 0m0.794s 00:31:27.563 sys 0m1.743s 00:31:27.563 05:50:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.563 05:50:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.563 ************************************ 00:31:27.563 END TEST nvme_scc 00:31:27.563 ************************************ 00:31:27.563 05:50:31 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:27.563 05:50:31 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:27.563 05:50:31 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:27.563 05:50:31 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:27.563 05:50:31 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:27.563 05:50:31 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:27.563 05:50:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:27.563 05:50:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.563 05:50:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.563 ************************************ 00:31:27.563 START TEST nvme_rpc 00:31:27.563 ************************************ 00:31:27.563 05:50:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:27.563 * Looking for test storage... 
00:31:27.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:27.822 05:50:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:27.822 05:50:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:27.822 05:50:31 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:27.822 05:50:31 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:27.822 05:50:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:27.822 05:50:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:27.822 05:50:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:27.822 05:50:31 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:27.822 05:50:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:27.822 05:50:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:27.822 05:50:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:27.822 05:50:31 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=184258 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:27.822 05:50:31 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 184258 00:31:27.822 05:50:31 -- common/autotest_common.sh@819 -- # '[' -z 184258 ']' 00:31:27.822 05:50:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.822 05:50:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:27.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.822 05:50:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.822 05:50:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:27.822 05:50:31 -- common/autotest_common.sh@10 -- # set +x 00:31:27.822 [2024-10-07 05:50:31.670722] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
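(Aside, not part of the captured output.) The bdf=0000:00:06.0 picked up above comes from the get_first_nvme_bdf helper traced a few lines earlier; a minimal sketch of that lookup, assuming the repository path used in this run (gen_nvme.sh emits an SPDK JSON config whose traddr parameters are the PCI addresses of the attached NVMe controllers):

    # Sketch only: first-NVMe-BDF discovery as traced above (simplified).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[0]}"    # here: 0000:00:06.0

That address is then handed to scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 once the spdk_tgt launched above is listening, as the next part of the trace shows.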
00:31:27.822 [2024-10-07 05:50:31.671371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184258 ] 00:31:28.081 [2024-10-07 05:50:31.831803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:28.340 [2024-10-07 05:50:32.098518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:28.340 [2024-10-07 05:50:32.098992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.340 [2024-10-07 05:50:32.099017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.720 05:50:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:29.720 05:50:33 -- common/autotest_common.sh@852 -- # return 0 00:31:29.720 05:50:33 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:29.720 Nvme0n1 00:31:29.720 05:50:33 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:29.720 05:50:33 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:29.979 request: 00:31:29.979 { 00:31:29.979 "filename": "non_existing_file", 00:31:29.979 "bdev_name": "Nvme0n1", 00:31:29.979 "method": "bdev_nvme_apply_firmware", 00:31:29.979 "req_id": 1 00:31:29.979 } 00:31:29.979 Got JSON-RPC error response 00:31:29.979 response: 00:31:29.979 { 00:31:29.979 "code": -32603, 00:31:29.979 "message": "open file failed." 00:31:29.979 } 00:31:29.979 05:50:33 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:29.979 05:50:33 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:29.979 05:50:33 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:30.237 05:50:34 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:30.238 05:50:34 -- nvme/nvme_rpc.sh@40 -- # killprocess 184258 00:31:30.238 05:50:34 -- common/autotest_common.sh@926 -- # '[' -z 184258 ']' 00:31:30.238 05:50:34 -- common/autotest_common.sh@930 -- # kill -0 184258 00:31:30.238 05:50:34 -- common/autotest_common.sh@931 -- # uname 00:31:30.238 05:50:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:30.238 05:50:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184258 00:31:30.238 05:50:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:30.238 05:50:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:30.238 killing process with pid 184258 00:31:30.238 05:50:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184258' 00:31:30.238 05:50:34 -- common/autotest_common.sh@945 -- # kill 184258 00:31:30.238 05:50:34 -- common/autotest_common.sh@950 -- # wait 184258 00:31:32.139 00:31:32.139 real 0m4.506s 00:31:32.139 user 0m8.633s 00:31:32.139 sys 0m0.744s 00:31:32.139 05:50:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.139 05:50:35 -- common/autotest_common.sh@10 -- # set +x 00:31:32.139 ************************************ 00:31:32.139 END TEST nvme_rpc 00:31:32.139 ************************************ 00:31:32.139 05:50:36 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:32.139 05:50:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:32.139 05:50:36 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:31:32.139 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.139 ************************************ 00:31:32.139 START TEST nvme_rpc_timeouts 00:31:32.139 ************************************ 00:31:32.139 05:50:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:32.139 * Looking for test storage... 00:31:32.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_184348 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_184348 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=184378 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:32.139 05:50:36 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 184378 00:31:32.139 05:50:36 -- common/autotest_common.sh@819 -- # '[' -z 184378 ']' 00:31:32.139 05:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.139 05:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:32.139 05:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.139 05:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:32.139 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.398 [2024-10-07 05:50:36.175188] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:32.398 [2024-10-07 05:50:36.175638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184378 ] 00:31:32.398 [2024-10-07 05:50:36.344797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:32.657 [2024-10-07 05:50:36.537670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.657 [2024-10-07 05:50:36.538049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.657 [2024-10-07 05:50:36.538062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.034 05:50:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:34.034 05:50:37 -- common/autotest_common.sh@852 -- # return 0 00:31:34.034 Checking default timeout settings: 00:31:34.034 05:50:37 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:34.034 05:50:37 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:34.292 Making settings changes with rpc: 00:31:34.292 05:50:38 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:34.292 05:50:38 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:34.555 Check default vs. 
modified settings: 00:31:34.555 05:50:38 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:34.555 05:50:38 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:34.817 Setting action_on_timeout is changed as expected. 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:34.817 Setting timeout_us is changed as expected. 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_184348 00:31:34.817 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:31:34.818 Setting timeout_admin_us is changed as expected. 
00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_184348 /tmp/settings_modified_184348 00:31:34.818 05:50:38 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 184378 00:31:34.818 05:50:38 -- common/autotest_common.sh@926 -- # '[' -z 184378 ']' 00:31:34.818 05:50:38 -- common/autotest_common.sh@930 -- # kill -0 184378 00:31:34.818 05:50:38 -- common/autotest_common.sh@931 -- # uname 00:31:34.818 05:50:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:34.818 05:50:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184378 00:31:35.076 05:50:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:35.076 05:50:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:35.076 killing process with pid 184378 00:31:35.076 05:50:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184378' 00:31:35.076 05:50:38 -- common/autotest_common.sh@945 -- # kill 184378 00:31:35.076 05:50:38 -- common/autotest_common.sh@950 -- # wait 184378 00:31:36.982 RPC TIMEOUT SETTING TEST PASSED. 00:31:36.982 05:50:40 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:31:36.982 00:31:36.982 real 0m4.699s 00:31:36.982 user 0m9.212s 00:31:36.982 sys 0m0.721s 00:31:36.982 05:50:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.982 05:50:40 -- common/autotest_common.sh@10 -- # set +x 00:31:36.982 ************************************ 00:31:36.982 END TEST nvme_rpc_timeouts 00:31:36.982 ************************************ 00:31:36.982 05:50:40 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:36.982 05:50:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:36.982 05:50:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:36.982 05:50:40 -- common/autotest_common.sh@10 -- # set +x 00:31:36.982 05:50:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:36.982 05:50:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:36.982 05:50:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:36.982 05:50:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:36.982 05:50:40 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:36.982 05:50:40 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:36.982 05:50:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:36.982 05:50:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:31:36.982 05:50:40 -- common/autotest_common.sh@10 -- # set +x 00:31:36.982 ************************************ 00:31:36.982 START TEST blockdev_raid5f 00:31:36.982 ************************************ 00:31:36.982 05:50:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:36.982 * Looking for test storage... 00:31:36.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:36.982 05:50:40 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:36.982 05:50:40 -- bdev/nbd_common.sh@6 -- # set -e 00:31:36.982 05:50:40 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:36.982 05:50:40 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:36.982 05:50:40 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:36.982 05:50:40 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:36.982 05:50:40 -- bdev/blockdev.sh@18 -- # : 00:31:36.982 05:50:40 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:36.982 05:50:40 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:36.982 05:50:40 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:36.982 05:50:40 -- bdev/blockdev.sh@672 -- # uname -s 00:31:36.982 05:50:40 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:36.982 05:50:40 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:36.982 05:50:40 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:36.982 05:50:40 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:36.982 05:50:40 -- bdev/blockdev.sh@682 -- # dek= 00:31:36.982 05:50:40 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:36.982 05:50:40 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:36.982 05:50:40 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:36.982 05:50:40 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:36.982 05:50:40 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:36.982 05:50:40 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:36.982 05:50:40 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=184529 00:31:36.982 05:50:40 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:36.982 05:50:40 -- bdev/blockdev.sh@47 -- # waitforlisten 184529 00:31:36.982 05:50:40 -- common/autotest_common.sh@819 -- # '[' -z 184529 ']' 00:31:36.982 05:50:40 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:36.982 05:50:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.982 05:50:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:36.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.982 05:50:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.982 05:50:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:36.982 05:50:40 -- common/autotest_common.sh@10 -- # set +x 00:31:37.241 [2024-10-07 05:50:40.997230] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:37.241 [2024-10-07 05:50:40.997462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184529 ] 00:31:37.241 [2024-10-07 05:50:41.173005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.501 [2024-10-07 05:50:41.426863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:37.501 [2024-10-07 05:50:41.427085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.880 05:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.880 05:50:42 -- common/autotest_common.sh@852 -- # return 0 00:31:38.880 05:50:42 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:38.880 05:50:42 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:38.880 05:50:42 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:38.880 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.880 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 Malloc0 00:31:38.880 Malloc1 00:31:38.880 Malloc2 00:31:38.880 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.880 05:50:42 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:38.880 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.880 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.880 05:50:42 -- bdev/blockdev.sh@738 -- # cat 00:31:38.880 05:50:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:38.880 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.880 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.880 05:50:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:38.880 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.880 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.880 05:50:42 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:38.880 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.880 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.140 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.140 05:50:42 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:39.140 05:50:42 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:39.140 05:50:42 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:39.140 05:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.140 05:50:42 -- common/autotest_common.sh@10 -- # set +x 00:31:39.140 05:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.140 05:50:42 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:39.140 05:50:42 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a15af071-2ffa-4f48-8338-da2f158665e5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a15af071-2ffa-4f48-8338-da2f158665e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a15af071-2ffa-4f48-8338-da2f158665e5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6c577815-d5d8-4dc6-a91d-0909b5c3fdbc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e93f8e3e-d78b-49a5-a113-70ac054671f3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6bd1e811-ca64-419c-a639-2d5f67538de3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:39.140 05:50:42 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:39.140 05:50:42 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:39.140 05:50:42 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:39.140 05:50:42 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:39.140 05:50:42 -- bdev/blockdev.sh@752 -- # killprocess 184529 00:31:39.140 05:50:42 -- common/autotest_common.sh@926 -- # '[' -z 184529 ']' 00:31:39.140 05:50:42 -- common/autotest_common.sh@930 -- # kill -0 184529 00:31:39.140 05:50:42 -- common/autotest_common.sh@931 -- # uname 00:31:39.140 05:50:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:39.140 05:50:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184529 00:31:39.140 05:50:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:39.140 05:50:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:39.140 killing process with pid 184529 00:31:39.140 05:50:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184529' 00:31:39.140 05:50:42 -- common/autotest_common.sh@945 -- # kill 184529 00:31:39.140 05:50:42 -- common/autotest_common.sh@950 -- # wait 184529 00:31:41.701 05:50:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:41.701 05:50:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:41.701 05:50:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:41.701 05:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:41.701 05:50:45 -- common/autotest_common.sh@10 -- # set +x 00:31:41.701 ************************************ 00:31:41.701 START TEST bdev_hello_world 00:31:41.701 ************************************ 00:31:41.701 05:50:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:41.701 [2024-10-07 05:50:45.210813] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:31:41.701 [2024-10-07 05:50:45.211030] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184615 ] 00:31:41.701 [2024-10-07 05:50:45.380687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.701 [2024-10-07 05:50:45.570142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.268 [2024-10-07 05:50:46.069164] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:42.268 [2024-10-07 05:50:46.069269] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:42.268 [2024-10-07 05:50:46.069302] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:42.268 [2024-10-07 05:50:46.069871] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:42.268 [2024-10-07 05:50:46.070094] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:42.268 [2024-10-07 05:50:46.070145] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:42.268 [2024-10-07 05:50:46.070225] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:42.268 00:31:42.268 [2024-10-07 05:50:46.070268] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:43.644 00:31:43.644 real 0m2.142s 00:31:43.644 user 0m1.675s 00:31:43.644 sys 0m0.344s 00:31:43.644 05:50:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.644 05:50:47 -- common/autotest_common.sh@10 -- # set +x 00:31:43.644 ************************************ 00:31:43.644 END TEST bdev_hello_world 00:31:43.644 ************************************ 00:31:43.644 05:50:47 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:43.644 05:50:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:43.644 05:50:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:43.644 05:50:47 -- common/autotest_common.sh@10 -- # set +x 00:31:43.644 ************************************ 00:31:43.644 START TEST bdev_bounds 00:31:43.644 ************************************ 00:31:43.644 05:50:47 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:43.644 05:50:47 -- bdev/blockdev.sh@288 -- # bdevio_pid=184660 00:31:43.644 05:50:47 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:43.644 05:50:47 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:43.645 Process bdevio pid: 184660 00:31:43.645 05:50:47 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 184660' 00:31:43.645 05:50:47 -- bdev/blockdev.sh@291 -- # waitforlisten 184660 00:31:43.645 05:50:47 -- common/autotest_common.sh@819 -- # '[' -z 184660 ']' 00:31:43.645 05:50:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.645 05:50:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:43.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.645 05:50:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:43.645 05:50:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:43.645 05:50:47 -- common/autotest_common.sh@10 -- # set +x 00:31:43.645 [2024-10-07 05:50:47.415092] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:43.645 [2024-10-07 05:50:47.415314] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184660 ] 00:31:43.645 [2024-10-07 05:50:47.601827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:43.903 [2024-10-07 05:50:47.812956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.903 [2024-10-07 05:50:47.813110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.903 [2024-10-07 05:50:47.813131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.469 05:50:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:44.469 05:50:48 -- common/autotest_common.sh@852 -- # return 0 00:31:44.469 05:50:48 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:44.469 I/O targets: 00:31:44.469 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:44.469 00:31:44.469 00:31:44.469 CUnit - A unit testing framework for C - Version 2.1-3 00:31:44.469 http://cunit.sourceforge.net/ 00:31:44.469 00:31:44.469 00:31:44.469 Suite: bdevio tests on: raid5f 00:31:44.469 Test: blockdev write read block ...passed 00:31:44.469 Test: blockdev write zeroes read block ...passed 00:31:44.727 Test: blockdev write zeroes read no split ...passed 00:31:44.727 Test: blockdev write zeroes read split ...passed 00:31:44.727 Test: blockdev write zeroes read split partial ...passed 00:31:44.727 Test: blockdev reset ...passed 00:31:44.727 Test: blockdev write read 8 blocks ...passed 00:31:44.727 Test: blockdev write read size > 128k ...passed 00:31:44.727 Test: blockdev write read invalid size ...passed 00:31:44.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:44.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:44.727 Test: blockdev write read max offset ...passed 00:31:44.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:44.727 Test: blockdev writev readv 8 blocks ...passed 00:31:44.727 Test: blockdev writev readv 30 x 1block ...passed 00:31:44.727 Test: blockdev writev readv block ...passed 00:31:44.727 Test: blockdev writev readv size > 128k ...passed 00:31:44.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:44.727 Test: blockdev comparev and writev ...passed 00:31:44.727 Test: blockdev nvme passthru rw ...passed 00:31:44.727 Test: blockdev nvme passthru vendor specific ...passed 00:31:44.727 Test: blockdev nvme admin passthru ...passed 00:31:44.727 Test: blockdev copy ...passed 00:31:44.727 00:31:44.727 Run Summary: Type Total Ran Passed Failed Inactive 00:31:44.727 suites 1 1 n/a 0 0 00:31:44.727 tests 23 23 23 0 0 00:31:44.727 asserts 130 130 130 0 n/a 00:31:44.727 00:31:44.727 Elapsed time = 0.391 seconds 00:31:44.727 0 00:31:44.727 05:50:48 -- bdev/blockdev.sh@293 -- # killprocess 184660 00:31:44.727 05:50:48 -- common/autotest_common.sh@926 -- # '[' -z 184660 ']' 00:31:44.727 05:50:48 -- common/autotest_common.sh@930 -- # kill -0 184660 00:31:44.727 05:50:48 -- common/autotest_common.sh@931 -- # uname 00:31:44.727 05:50:48 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.727 05:50:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184660 00:31:44.727 05:50:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:44.727 05:50:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:44.727 killing process with pid 184660 00:31:44.727 05:50:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184660' 00:31:44.727 05:50:48 -- common/autotest_common.sh@945 -- # kill 184660 00:31:44.727 05:50:48 -- common/autotest_common.sh@950 -- # wait 184660 00:31:46.103 05:50:49 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:46.103 00:31:46.103 real 0m2.544s 00:31:46.103 user 0m5.872s 00:31:46.103 sys 0m0.442s 00:31:46.103 05:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.103 05:50:49 -- common/autotest_common.sh@10 -- # set +x 00:31:46.103 ************************************ 00:31:46.103 END TEST bdev_bounds 00:31:46.103 ************************************ 00:31:46.103 05:50:49 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:46.103 05:50:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:46.103 05:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.103 05:50:49 -- common/autotest_common.sh@10 -- # set +x 00:31:46.103 ************************************ 00:31:46.103 START TEST bdev_nbd 00:31:46.103 ************************************ 00:31:46.103 05:50:49 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:46.103 05:50:49 -- bdev/blockdev.sh@298 -- # uname -s 00:31:46.103 05:50:49 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:46.103 05:50:49 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:46.103 05:50:49 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:46.103 05:50:49 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:31:46.103 05:50:49 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:46.103 05:50:49 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:46.103 05:50:49 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:46.103 05:50:49 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:46.103 05:50:49 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:46.103 05:50:49 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:46.103 05:50:49 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:31:46.103 05:50:49 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:46.103 05:50:49 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:31:46.103 05:50:49 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:46.103 05:50:49 -- bdev/blockdev.sh@316 -- # nbd_pid=184725 00:31:46.103 05:50:49 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:46.103 05:50:49 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:46.103 05:50:49 -- bdev/blockdev.sh@318 -- # waitforlisten 184725 /var/tmp/spdk-nbd.sock 00:31:46.103 05:50:49 -- common/autotest_common.sh@819 -- # '[' -z 184725 ']' 00:31:46.103 05:50:49 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:46.103 05:50:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.103 05:50:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:46.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:46.104 05:50:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.104 05:50:49 -- common/autotest_common.sh@10 -- # set +x 00:31:46.104 [2024-10-07 05:50:50.022207] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:31:46.104 [2024-10-07 05:50:50.022425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.362 [2024-10-07 05:50:50.193641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.621 [2024-10-07 05:50:50.394203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.187 05:50:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.187 05:50:50 -- common/autotest_common.sh@852 -- # return 0 00:31:47.187 05:50:50 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@24 -- # local i 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:47.187 05:50:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:47.445 05:50:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:47.445 05:50:51 -- common/autotest_common.sh@857 -- # local i 00:31:47.445 05:50:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:47.445 05:50:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:47.445 05:50:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:47.445 05:50:51 -- common/autotest_common.sh@861 -- # break 00:31:47.445 05:50:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:47.445 05:50:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:47.445 05:50:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.445 1+0 records in 00:31:47.445 1+0 records out 00:31:47.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283919 s, 14.4 MB/s 00:31:47.445 05:50:51 -- common/autotest_common.sh@874 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.445 05:50:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:47.445 05:50:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.445 05:50:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:47.445 05:50:51 -- common/autotest_common.sh@877 -- # return 0 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:47.445 05:50:51 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:47.704 { 00:31:47.704 "nbd_device": "/dev/nbd0", 00:31:47.704 "bdev_name": "raid5f" 00:31:47.704 } 00:31:47.704 ]' 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:47.704 { 00:31:47.704 "nbd_device": "/dev/nbd0", 00:31:47.704 "bdev_name": "raid5f" 00:31:47.704 } 00:31:47.704 ]' 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@51 -- # local i 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:47.704 05:50:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@41 -- # break 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@45 -- # return 0 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.962 05:50:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@65 -- # true 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@65 -- # count=0 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@122 -- # count=0 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@127 -- # return 0 00:31:48.220 05:50:52 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@12 -- # local i 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.220 05:50:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:48.478 /dev/nbd0 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:48.478 05:50:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:48.478 05:50:52 -- common/autotest_common.sh@857 -- # local i 00:31:48.478 05:50:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:48.478 05:50:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:48.478 05:50:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:48.478 05:50:52 -- common/autotest_common.sh@861 -- # break 00:31:48.478 05:50:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:48.478 05:50:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:48.478 05:50:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.478 1+0 records in 00:31:48.478 1+0 records out 00:31:48.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424729 s, 9.6 MB/s 00:31:48.478 05:50:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.478 05:50:52 -- common/autotest_common.sh@874 -- # size=4096 00:31:48.478 05:50:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.478 05:50:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:48.478 05:50:52 -- common/autotest_common.sh@877 -- # return 0 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.478 05:50:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:48.737 { 00:31:48.737 "nbd_device": "/dev/nbd0", 00:31:48.737 "bdev_name": "raid5f" 00:31:48.737 } 00:31:48.737 ]' 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:48.737 { 00:31:48.737 "nbd_device": "/dev/nbd0", 00:31:48.737 "bdev_name": "raid5f" 00:31:48.737 } 
00:31:48.737 ]' 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@65 -- # count=1 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@95 -- # count=1 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:48.737 256+0 records in 00:31:48.737 256+0 records out 00:31:48.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836169 s, 125 MB/s 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:48.737 256+0 records in 00:31:48.737 256+0 records out 00:31:48.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283485 s, 37.0 MB/s 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:48.737 05:50:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@51 -- # local i 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:48.738 05:50:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:48.996 05:50:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:48.996 05:50:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:48.996 05:50:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:48.996 05:50:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:48.996 05:50:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:31:48.996 05:50:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:49.254 05:50:52 -- bdev/nbd_common.sh@41 -- # break 00:31:49.254 05:50:52 -- bdev/nbd_common.sh@45 -- # return 0 00:31:49.254 05:50:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:49.254 05:50:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.254 05:50:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:49.254 05:50:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:49.254 05:50:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:49.254 05:50:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@65 -- # true 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@65 -- # count=0 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@104 -- # count=0 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@109 -- # return 0 00:31:49.512 05:50:53 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:49.512 malloc_lvol_verify 00:31:49.512 05:50:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:49.771 0a97629a-d62d-4abb-b20d-27f2ad1aa51b 00:31:49.771 05:50:53 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:50.029 9ccccbaa-9c24-42ec-b6c5-0af2ade94a6c 00:31:50.029 05:50:53 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:50.288 /dev/nbd0 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:50.288 mke2fs 1.46.5 (30-Dec-2021) 00:31:50.288 00:31:50.288 Filesystem too small for a journal 00:31:50.288 Discarding device blocks: 0/1024 done 00:31:50.288 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:50.288 00:31:50.288 Allocating group tables: 0/1 done 00:31:50.288 Writing inode tables: 0/1 done 00:31:50.288 Writing superblocks and filesystem accounting information: 0/1 done 00:31:50.288 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@51 -- # local i 00:31:50.288 05:50:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:31:50.288 05:50:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@41 -- # break 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:50.547 05:50:54 -- bdev/nbd_common.sh@147 -- # return 0 00:31:50.547 05:50:54 -- bdev/blockdev.sh@324 -- # killprocess 184725 00:31:50.547 05:50:54 -- common/autotest_common.sh@926 -- # '[' -z 184725 ']' 00:31:50.547 05:50:54 -- common/autotest_common.sh@930 -- # kill -0 184725 00:31:50.547 05:50:54 -- common/autotest_common.sh@931 -- # uname 00:31:50.547 05:50:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:50.547 05:50:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 184725 00:31:50.547 05:50:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:50.547 05:50:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:50.547 05:50:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 184725' 00:31:50.547 killing process with pid 184725 00:31:50.547 05:50:54 -- common/autotest_common.sh@945 -- # kill 184725 00:31:50.547 05:50:54 -- common/autotest_common.sh@950 -- # wait 184725 00:31:51.924 05:50:55 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:51.924 00:31:51.924 real 0m5.678s 00:31:51.924 user 0m7.927s 00:31:51.924 sys 0m1.311s 00:31:51.924 05:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.924 ************************************ 00:31:51.924 END TEST bdev_nbd 00:31:51.924 ************************************ 00:31:51.924 05:50:55 -- common/autotest_common.sh@10 -- # set +x 00:31:51.924 05:50:55 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:51.924 05:50:55 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.924 05:50:55 -- common/autotest_common.sh@10 -- # set +x 00:31:51.924 ************************************ 00:31:51.924 START TEST bdev_fio 00:31:51.924 ************************************ 00:31:51.924 05:50:55 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@329 -- # local env_context 00:31:51.924 05:50:55 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:51.924 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:51.924 05:50:55 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:51.924 05:50:55 -- bdev/blockdev.sh@337 -- # echo '' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:51.924 05:50:55 -- bdev/blockdev.sh@337 -- # env_context= 00:31:51.924 05:50:55 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:51.924 05:50:55 -- common/autotest_common.sh@1260 -- # local workload=verify 00:31:51.924 05:50:55 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:31:51.924 05:50:55 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:51.924 05:50:55 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:51.924 05:50:55 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:51.924 05:50:55 -- common/autotest_common.sh@1280 -- # cat 00:31:51.924 05:50:55 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1293 -- # cat 00:31:51.924 05:50:55 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:31:51.924 05:50:55 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:51.924 05:50:55 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:31:51.924 05:50:55 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:51.924 05:50:55 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:51.924 05:50:55 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:51.924 05:50:55 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:51.924 05:50:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.924 05:50:55 -- common/autotest_common.sh@10 -- # set +x 00:31:51.924 ************************************ 00:31:51.924 START TEST bdev_fio_rw_verify 00:31:51.924 ************************************ 00:31:51.924 05:50:55 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:51.924 05:50:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:51.924 05:50:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:51.924 05:50:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:31:51.924 05:50:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:51.924 05:50:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:51.924 05:50:55 -- common/autotest_common.sh@1320 -- # shift 00:31:51.924 05:50:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:51.924 05:50:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.924 05:50:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:51.924 05:50:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:51.924 05:50:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:31:51.924 05:50:55 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:31:51.924 05:50:55 -- common/autotest_common.sh@1326 -- # break 00:31:51.924 05:50:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:51.924 05:50:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:52.182 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:52.182 fio-3.35 00:31:52.182 Starting 1 thread 00:32:04.380 00:32:04.381 job_raid5f: (groupid=0, jobs=1): err= 0: pid=184968: Mon Oct 7 05:51:06 2024 00:32:04.381 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:32:04.381 slat (nsec): min=18468, max=68289, avg=19664.49, stdev=2012.82 00:32:04.381 clat (usec): min=12, max=370, avg=131.28, stdev=46.47 00:32:04.381 lat (usec): min=33, max=402, avg=150.94, stdev=47.09 00:32:04.381 clat percentiles (usec): 00:32:04.381 | 50.000th=[ 139], 99.000th=[ 217], 99.900th=[ 318], 99.990th=[ 334], 00:32:04.381 | 99.999th=[ 371] 00:32:04.381 write: IOPS=12.8k, BW=49.8MiB/s (52.3MB/s)(492MiB/9874msec); 0 zone resets 00:32:04.381 slat (usec): min=9, max=181, avg=16.92, stdev= 2.98 00:32:04.381 clat (usec): min=59, max=1107, avg=300.39, stdev=40.01 00:32:04.381 lat (usec): min=75, max=1281, avg=317.31, stdev=41.24 00:32:04.381 clat percentiles (usec): 00:32:04.381 | 50.000th=[ 302], 99.000th=[ 400], 99.900th=[ 529], 99.990th=[ 848], 00:32:04.381 | 99.999th=[ 1057] 00:32:04.381 bw ( KiB/s): min=45976, max=54032, per=98.80%, avg=50418.11, stdev=1951.44, samples=19 00:32:04.381 iops : min=11494, max=13508, avg=12604.53, stdev=487.86, samples=19 00:32:04.381 lat (usec) : 20=0.01%, 50=0.01%, 100=15.26%, 250=39.02%, 500=45.61% 00:32:04.381 lat (usec) : 750=0.10%, 1000=0.01% 00:32:04.381 lat (msec) : 2=0.01% 00:32:04.381 cpu : usr=99.47%, sys=0.51%, ctx=37, majf=0, minf=8649 00:32:04.381 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.381 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.381 issued rwts: total=121355,125972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:04.381 00:32:04.381 Run status group 0 (all jobs): 00:32:04.381 READ: bw=47.4MiB/s 
(49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:32:04.381 WRITE: bw=49.8MiB/s (52.3MB/s), 49.8MiB/s-49.8MiB/s (52.3MB/s-52.3MB/s), io=492MiB (516MB), run=9874-9874msec 00:32:04.381 ----------------------------------------------------- 00:32:04.381 Suppressions used: 00:32:04.381 count bytes template 00:32:04.381 1 7 /usr/src/fio/parse.c 00:32:04.381 844 81024 /usr/src/fio/iolog.c 00:32:04.381 1 904 libcrypto.so 00:32:04.381 ----------------------------------------------------- 00:32:04.381 00:32:04.381 00:32:04.381 real 0m12.317s 00:32:04.381 user 0m12.785s 00:32:04.381 sys 0m0.700s 00:32:04.381 05:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.381 05:51:08 -- common/autotest_common.sh@10 -- # set +x 00:32:04.381 ************************************ 00:32:04.381 END TEST bdev_fio_rw_verify 00:32:04.381 ************************************ 00:32:04.381 05:51:08 -- bdev/blockdev.sh@348 -- # rm -f 00:32:04.381 05:51:08 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.381 05:51:08 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.381 05:51:08 -- common/autotest_common.sh@1260 -- # local workload=trim 00:32:04.381 05:51:08 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:32:04.381 05:51:08 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:04.381 05:51:08 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:04.381 05:51:08 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.381 05:51:08 -- common/autotest_common.sh@1280 -- # cat 00:32:04.381 05:51:08 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:32:04.381 05:51:08 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a15af071-2ffa-4f48-8338-da2f158665e5"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a15af071-2ffa-4f48-8338-da2f158665e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a15af071-2ffa-4f48-8338-da2f158665e5",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "6c577815-d5d8-4dc6-a91d-0909b5c3fdbc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' 
"name": "Malloc1",' ' "uuid": "e93f8e3e-d78b-49a5-a113-70ac054671f3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6bd1e811-ca64-419c-a639-2d5f67538de3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:32:04.381 05:51:08 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:04.381 05:51:08 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:32:04.381 05:51:08 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:04.381 05:51:08 -- bdev/blockdev.sh@360 -- # popd 00:32:04.381 /home/vagrant/spdk_repo/spdk 00:32:04.381 05:51:08 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:32:04.381 05:51:08 -- bdev/blockdev.sh@362 -- # return 0 00:32:04.381 00:32:04.381 real 0m12.499s 00:32:04.381 user 0m12.891s 00:32:04.381 sys 0m0.775s 00:32:04.381 05:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.381 05:51:08 -- common/autotest_common.sh@10 -- # set +x 00:32:04.381 ************************************ 00:32:04.381 END TEST bdev_fio 00:32:04.381 ************************************ 00:32:04.381 05:51:08 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:04.381 05:51:08 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:04.381 05:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.381 05:51:08 -- common/autotest_common.sh@10 -- # set +x 00:32:04.381 ************************************ 00:32:04.381 START TEST bdev_verify 00:32:04.381 ************************************ 00:32:04.381 05:51:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:04.381 [2024-10-07 05:51:08.316764] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:04.381 [2024-10-07 05:51:08.316975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185137 ] 00:32:04.639 [2024-10-07 05:51:08.492961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:04.898 [2024-10-07 05:51:08.699440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.898 [2024-10-07 05:51:08.699456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.465 Running I/O for 5 seconds... 
00:32:10.769 00:32:10.769 Latency(us) 00:32:10.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.769 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:10.769 Verification LBA range: start 0x0 length 0x2000 00:32:10.769 raid5f : 5.01 8376.80 32.72 0.00 0.00 24230.15 141.50 19660.80 00:32:10.769 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:10.769 Verification LBA range: start 0x2000 length 0x2000 00:32:10.769 raid5f : 5.01 8228.25 32.14 0.00 0.00 24665.07 271.83 20971.52 00:32:10.769 =================================================================================================================== 00:32:10.769 Total : 16605.05 64.86 0.00 0.00 24445.63 141.50 20971.52 00:32:11.705 00:32:11.705 real 0m7.206s 00:32:11.705 user 0m13.087s 00:32:11.705 sys 0m0.365s 00:32:11.705 05:51:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.705 05:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:11.705 ************************************ 00:32:11.705 END TEST bdev_verify 00:32:11.705 ************************************ 00:32:11.705 05:51:15 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:11.705 05:51:15 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:11.705 05:51:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.705 05:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:11.705 ************************************ 00:32:11.705 START TEST bdev_verify_big_io 00:32:11.705 ************************************ 00:32:11.705 05:51:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:11.705 [2024-10-07 05:51:15.565102] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:11.706 [2024-10-07 05:51:15.565498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185246 ] 00:32:11.963 [2024-10-07 05:51:15.738777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:11.963 [2024-10-07 05:51:15.930456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.963 [2024-10-07 05:51:15.930470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.530 Running I/O for 5 seconds... 
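The bdev_verify stage that just completed and the bdev_verify_big_io stage that follows both drive the same bdevperf example binary; only the IO size changes between them. Extracted from the log, the standalone invocation is roughly:

    # Verify workload against the bdevs described in bdev.json:
    # -q 128  queue depth, -o 4096  IO size in bytes, -w verify  workload,
    # -t 5    run time in seconds, -m 0x3  core mask (two reactors).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # The big-IO pass swaps -o 4096 for -o 65536; the later write_zeroes
    # pass uses -w write_zeroes -t 1 on a single core (-m not set, mask 0x1).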
00:32:17.795 00:32:17.795 Latency(us) 00:32:17.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.795 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:17.795 Verification LBA range: start 0x0 length 0x200 00:32:17.795 raid5f : 5.18 610.14 38.13 0.00 0.00 5460931.71 138.71 180164.42 00:32:17.795 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:17.795 Verification LBA range: start 0x200 length 0x200 00:32:17.795 raid5f : 5.18 617.05 38.57 0.00 0.00 5397545.57 368.64 178257.92 00:32:17.795 =================================================================================================================== 00:32:17.795 Total : 1227.20 76.70 0.00 0.00 5429064.26 138.71 180164.42 00:32:19.174 00:32:19.174 real 0m7.350s 00:32:19.174 user 0m13.438s 00:32:19.174 sys 0m0.340s 00:32:19.174 05:51:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.174 05:51:22 -- common/autotest_common.sh@10 -- # set +x 00:32:19.174 ************************************ 00:32:19.174 END TEST bdev_verify_big_io 00:32:19.174 ************************************ 00:32:19.174 05:51:22 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:19.174 05:51:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:19.174 05:51:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:19.174 05:51:22 -- common/autotest_common.sh@10 -- # set +x 00:32:19.174 ************************************ 00:32:19.174 START TEST bdev_write_zeroes 00:32:19.174 ************************************ 00:32:19.174 05:51:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:19.174 [2024-10-07 05:51:22.957352] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:19.174 [2024-10-07 05:51:22.957496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185361 ] 00:32:19.174 [2024-10-07 05:51:23.110019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.433 [2024-10-07 05:51:23.294380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.001 Running I/O for 1 seconds... 
00:32:20.938 00:32:20.938 Latency(us) 00:32:20.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.938 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:20.938 raid5f : 1.00 27711.11 108.25 0.00 0.00 4603.34 1474.56 5540.77 00:32:20.938 =================================================================================================================== 00:32:20.938 Total : 27711.11 108.25 0.00 0.00 4603.34 1474.56 5540.77 00:32:22.317 00:32:22.317 real 0m3.105s 00:32:22.317 user 0m2.696s 00:32:22.317 sys 0m0.297s 00:32:22.317 05:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:22.317 05:51:26 -- common/autotest_common.sh@10 -- # set +x 00:32:22.317 ************************************ 00:32:22.317 END TEST bdev_write_zeroes 00:32:22.317 ************************************ 00:32:22.317 05:51:26 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.317 05:51:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:22.317 05:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:22.317 05:51:26 -- common/autotest_common.sh@10 -- # set +x 00:32:22.317 ************************************ 00:32:22.317 START TEST bdev_json_nonenclosed 00:32:22.317 ************************************ 00:32:22.317 05:51:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:22.317 [2024-10-07 05:51:26.139527] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:22.317 [2024-10-07 05:51:26.139750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185417 ] 00:32:22.576 [2024-10-07 05:51:26.310731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.576 [2024-10-07 05:51:26.505751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.576 [2024-10-07 05:51:26.505983] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:22.576 [2024-10-07 05:51:26.506024] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:23.144 00:32:23.144 real 0m0.780s 00:32:23.144 user 0m0.504s 00:32:23.144 sys 0m0.177s 00:32:23.144 05:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.144 05:51:26 -- common/autotest_common.sh@10 -- # set +x 00:32:23.144 ************************************ 00:32:23.144 END TEST bdev_json_nonenclosed 00:32:23.144 ************************************ 00:32:23.144 05:51:26 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:23.144 05:51:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:23.144 05:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:23.144 05:51:26 -- common/autotest_common.sh@10 -- # set +x 00:32:23.144 ************************************ 00:32:23.144 START TEST bdev_json_nonarray 00:32:23.144 ************************************ 00:32:23.144 05:51:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:23.144 [2024-10-07 05:51:26.970251] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:32:23.144 [2024-10-07 05:51:26.970480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185446 ] 00:32:23.404 [2024-10-07 05:51:27.142086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.404 [2024-10-07 05:51:27.337937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.404 [2024-10-07 05:51:27.338169] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
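These JSON config tests feed bdevperf deliberately malformed files: nonenclosed.json is rejected because the file is not wrapped in a top-level {} object, and nonarray.json because its "subsystems" key is not an array. For contrast, a structurally valid config has roughly the shape sketched below; the malloc entry is an illustrative assumption, not the content of either fixture.

    # Sketch of a structurally valid SPDK JSON config: one object whose
    # "subsystems" member is an array of {subsystem, config} entries.
    cat > /tmp/valid.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF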
00:32:23.404 [2024-10-07 05:51:27.338224] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:23.972 00:32:23.972 real 0m0.773s 00:32:23.972 user 0m0.512s 00:32:23.972 sys 0m0.162s 00:32:23.972 05:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.972 05:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.972 ************************************ 00:32:23.972 END TEST bdev_json_nonarray 00:32:23.972 ************************************ 00:32:23.972 05:51:27 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:23.972 05:51:27 -- bdev/blockdev.sh@809 -- # cleanup 00:32:23.972 05:51:27 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:23.972 05:51:27 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:23.972 05:51:27 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:23.972 05:51:27 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:23.972 00:32:23.972 real 0m46.908s 00:32:23.972 user 1m3.330s 00:32:23.972 sys 0m5.114s 00:32:23.972 05:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.972 05:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.972 ************************************ 00:32:23.972 END TEST blockdev_raid5f 00:32:23.972 ************************************ 00:32:23.972 05:51:27 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:23.972 05:51:27 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:23.972 05:51:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:23.972 05:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:23.972 05:51:27 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:23.972 05:51:27 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:23.972 05:51:27 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:23.972 05:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:25.873 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:25.873 Waiting for block devices as requested 00:32:25.873 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:26.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:26.132 Cleaning 00:32:26.132 Removing: /var/run/dpdk/spdk0/config 00:32:26.132 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:26.132 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:26.132 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:26.132 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:26.132 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:26.132 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:26.132 Removing: /dev/shm/spdk_tgt_trace.pid103607 00:32:26.132 Removing: /var/run/dpdk/spdk0 00:32:26.132 Removing: /var/run/dpdk/spdk_pid103351 00:32:26.132 Removing: /var/run/dpdk/spdk_pid103607 00:32:26.132 Removing: /var/run/dpdk/spdk_pid103910 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104169 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104351 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104475 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104589 
00:32:26.132 Removing: /var/run/dpdk/spdk_pid104715 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104817 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104868 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104918 00:32:26.132 Removing: /var/run/dpdk/spdk_pid104996 00:32:26.132 Removing: /var/run/dpdk/spdk_pid105119 00:32:26.132 Removing: /var/run/dpdk/spdk_pid105998 00:32:26.132 Removing: /var/run/dpdk/spdk_pid106173 00:32:26.132 Removing: /var/run/dpdk/spdk_pid106381 00:32:26.132 Removing: /var/run/dpdk/spdk_pid106438 00:32:26.132 Removing: /var/run/dpdk/spdk_pid106902 00:32:26.132 Removing: /var/run/dpdk/spdk_pid106958 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107346 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107407 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107543 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107691 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107788 00:32:26.132 Removing: /var/run/dpdk/spdk_pid107883 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108283 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108371 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108513 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108636 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108825 00:32:26.132 Removing: /var/run/dpdk/spdk_pid108880 00:32:26.132 Removing: /var/run/dpdk/spdk_pid109051 00:32:26.132 Removing: /var/run/dpdk/spdk_pid109141 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109245 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109342 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109482 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109557 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109653 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109796 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109893 00:32:26.391 Removing: /var/run/dpdk/spdk_pid109995 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110093 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110221 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110308 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110394 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110571 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110642 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110746 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110881 00:32:26.391 Removing: /var/run/dpdk/spdk_pid110995 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111097 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111210 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111327 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111416 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111509 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111686 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111751 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111859 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111927 00:32:26.391 Removing: /var/run/dpdk/spdk_pid111980 00:32:26.391 Removing: /var/run/dpdk/spdk_pid112070 00:32:26.391 Removing: /var/run/dpdk/spdk_pid112344 00:32:26.391 Removing: /var/run/dpdk/spdk_pid112963 00:32:26.391 Removing: /var/run/dpdk/spdk_pid113990 00:32:26.391 Removing: /var/run/dpdk/spdk_pid114932 00:32:26.391 Removing: /var/run/dpdk/spdk_pid116211 00:32:26.391 Removing: /var/run/dpdk/spdk_pid116995 00:32:26.391 Removing: /var/run/dpdk/spdk_pid117721 00:32:26.391 Removing: /var/run/dpdk/spdk_pid117870 00:32:26.391 Removing: /var/run/dpdk/spdk_pid117955 00:32:26.391 Removing: /var/run/dpdk/spdk_pid118049 00:32:26.391 Removing: /var/run/dpdk/spdk_pid118236 00:32:26.391 Removing: /var/run/dpdk/spdk_pid118375 00:32:26.391 Removing: /var/run/dpdk/spdk_pid118600 00:32:26.391 Removing: /var/run/dpdk/spdk_pid118915 00:32:26.391 
Removing: /var/run/dpdk/spdk_pid119122 00:32:26.391 Removing: /var/run/dpdk/spdk_pid119295 00:32:26.391 Removing: /var/run/dpdk/spdk_pid121462 00:32:26.392 Removing: /var/run/dpdk/spdk_pid123961 00:32:26.392 Removing: /var/run/dpdk/spdk_pid126804 00:32:26.392 Removing: /var/run/dpdk/spdk_pid127240 00:32:26.392 Removing: /var/run/dpdk/spdk_pid127546 00:32:26.392 Removing: /var/run/dpdk/spdk_pid127720 00:32:26.392 Removing: /var/run/dpdk/spdk_pid127790 00:32:26.392 Removing: /var/run/dpdk/spdk_pid127822 00:32:26.392 Removing: /var/run/dpdk/spdk_pid132998 00:32:26.392 Removing: /var/run/dpdk/spdk_pid133679 00:32:26.392 Removing: /var/run/dpdk/spdk_pid134054 00:32:26.392 Removing: /var/run/dpdk/spdk_pid134193 00:32:26.392 Removing: /var/run/dpdk/spdk_pid136517 00:32:26.392 Removing: /var/run/dpdk/spdk_pid138451 00:32:26.392 Removing: /var/run/dpdk/spdk_pid140316 00:32:26.392 Removing: /var/run/dpdk/spdk_pid142921 00:32:26.392 Removing: /var/run/dpdk/spdk_pid145755 00:32:26.392 Removing: /var/run/dpdk/spdk_pid148068 00:32:26.392 Removing: /var/run/dpdk/spdk_pid151387 00:32:26.392 Removing: /var/run/dpdk/spdk_pid153982 00:32:26.392 Removing: /var/run/dpdk/spdk_pid156674 00:32:26.392 Removing: /var/run/dpdk/spdk_pid164500 00:32:26.392 Removing: /var/run/dpdk/spdk_pid165936 00:32:26.392 Removing: /var/run/dpdk/spdk_pid167299 00:32:26.392 Removing: /var/run/dpdk/spdk_pid167765 00:32:26.392 Removing: /var/run/dpdk/spdk_pid168331 00:32:26.392 Removing: /var/run/dpdk/spdk_pid168882 00:32:26.392 Removing: /var/run/dpdk/spdk_pid169533 00:32:26.392 Removing: /var/run/dpdk/spdk_pid170042 00:32:26.392 Removing: /var/run/dpdk/spdk_pid171403 00:32:26.392 Removing: /var/run/dpdk/spdk_pid171995 00:32:26.392 Removing: /var/run/dpdk/spdk_pid172532 00:32:26.392 Removing: /var/run/dpdk/spdk_pid174039 00:32:26.651 Removing: /var/run/dpdk/spdk_pid174706 00:32:26.651 Removing: /var/run/dpdk/spdk_pid175329 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176087 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176148 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176204 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176263 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176398 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176546 00:32:26.651 Removing: /var/run/dpdk/spdk_pid176768 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177062 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177079 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177130 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177165 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177194 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177226 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177253 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177287 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177320 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177340 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177373 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177404 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177434 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177466 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177494 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177526 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177553 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177588 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177612 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177641 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177694 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177724 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177770 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177851 00:32:26.651 Removing: 
/var/run/dpdk/spdk_pid177898 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177925 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177970 00:32:26.651 Removing: /var/run/dpdk/spdk_pid177996 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178020 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178084 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178107 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178151 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178179 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178200 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178227 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178244 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178272 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178290 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178314 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178362 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178407 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178435 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178473 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178505 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178521 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178585 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178613 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178655 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178682 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178699 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178727 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178752 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178771 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178800 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178824 00:32:26.651 Removing: /var/run/dpdk/spdk_pid178918 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179013 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179159 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179191 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179250 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179313 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179358 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179389 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179424 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179473 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179502 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179588 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179656 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179707 00:32:26.651 Removing: /var/run/dpdk/spdk_pid179978 00:32:26.651 Removing: /var/run/dpdk/spdk_pid180110 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180160 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180254 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180347 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180397 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180647 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180797 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180905 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180962 00:32:26.910 Removing: /var/run/dpdk/spdk_pid180992 00:32:26.910 Removing: /var/run/dpdk/spdk_pid181069 00:32:26.910 Removing: /var/run/dpdk/spdk_pid181513 00:32:26.910 Removing: /var/run/dpdk/spdk_pid181557 00:32:26.910 Removing: /var/run/dpdk/spdk_pid181873 00:32:26.910 Removing: /var/run/dpdk/spdk_pid181993 00:32:26.910 Removing: /var/run/dpdk/spdk_pid182101 00:32:26.910 Removing: /var/run/dpdk/spdk_pid182158 00:32:26.910 Removing: /var/run/dpdk/spdk_pid182196 00:32:26.910 Removing: /var/run/dpdk/spdk_pid182234 00:32:26.910 Removing: /var/run/dpdk/spdk_pid183599 00:32:26.910 Removing: /var/run/dpdk/spdk_pid183746 00:32:26.910 Removing: /var/run/dpdk/spdk_pid183751 00:32:26.910 Removing: 
/var/run/dpdk/spdk_pid183768 00:32:26.910 Removing: /var/run/dpdk/spdk_pid184258 00:32:26.911 Removing: /var/run/dpdk/spdk_pid184378 00:32:26.911 Removing: /var/run/dpdk/spdk_pid184529 00:32:26.911 Removing: /var/run/dpdk/spdk_pid184615 00:32:26.911 Removing: /var/run/dpdk/spdk_pid184660 00:32:26.911 Removing: /var/run/dpdk/spdk_pid184948 00:32:26.911 Removing: /var/run/dpdk/spdk_pid185137 00:32:26.911 Removing: /var/run/dpdk/spdk_pid185246 00:32:26.911 Removing: /var/run/dpdk/spdk_pid185361 00:32:26.911 Removing: /var/run/dpdk/spdk_pid185417 00:32:26.911 Removing: /var/run/dpdk/spdk_pid185446 00:32:26.911 Clean 00:32:26.911 killing process with pid 92446 00:32:26.911 killing process with pid 92447 00:32:26.911 05:51:30 -- common/autotest_common.sh@1436 -- # return 0 00:32:26.911 05:51:30 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:26.911 05:51:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:26.911 05:51:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.170 05:51:30 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:27.170 05:51:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:27.170 05:51:30 -- common/autotest_common.sh@10 -- # set +x 00:32:27.170 05:51:30 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:27.170 05:51:30 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:27.170 05:51:30 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:27.170 05:51:30 -- spdk/autotest.sh@394 -- # hash lcov 00:32:27.170 05:51:30 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:27.170 05:51:30 -- spdk/autotest.sh@396 -- # hostname 00:32:27.170 05:51:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:27.428 geninfo: WARNING: invalid characters removed from testname! 
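Once the test stages finish, the run turns to coverage collection: the lcov/geninfo capture above records the per-test data, and the commands on the following lines merge the baseline and test captures and strip external sources (dpdk, /usr, example apps) from the total. Reduced to its core, without the branch/function rc options used in the log, that flow is simply:

    # Illustrative reduction of the coverage post-processing seen below:
    # combine the two captures, then remove externals from the total.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info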
00:33:06.206 05:52:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:08.738 05:52:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:11.272 05:52:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:14.567 05:52:18 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:17.855 05:52:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:20.389 05:52:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:23.677 05:52:26 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:23.677 05:52:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:23.678 05:52:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:23.678 05:52:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.678 05:52:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.678 05:52:26 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.678 05:52:26 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.678 05:52:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.678 05:52:26 -- paths/export.sh@5 -- $ export PATH 00:33:23.678 05:52:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:23.678 05:52:26 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:23.678 05:52:26 -- common/autobuild_common.sh@440 -- $ date +%s 00:33:23.678 05:52:26 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1728280346.XXXXXX 00:33:23.678 05:52:26 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1728280346.7Bgkvg 00:33:23.678 05:52:26 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:33:23.678 05:52:26 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:33:23.678 05:52:26 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:23.678 05:52:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:23.678 05:52:26 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:23.678 05:52:26 -- common/autobuild_common.sh@456 -- $ get_config_params 00:33:23.678 05:52:26 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:23.678 05:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:33:23.678 05:52:27 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:23.678 05:52:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:33:23.678 05:52:27 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:33:23.678 05:52:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:23.678 05:52:27 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:23.678 05:52:27 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:23.678 05:52:27 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:33:23.678 05:52:27 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:33:23.678 05:52:27 -- common/autotest_common.sh@10 -- $ set +x 00:33:23.678 05:52:27 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:33:23.678 05:52:27 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:33:23.678 05:52:27 -- spdk/autopackage.sh@40 -- $ get_config_params 00:33:23.678 05:52:27 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:33:23.678 05:52:27 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:23.678 05:52:27 -- common/autotest_common.sh@10 -- $ set +x 00:33:23.678 05:52:27 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:33:23.678 05:52:27 -- spdk/autopackage.sh@41 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:33:23.678 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:23.678 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:23.678 Using 'verbs' RDMA provider 00:33:36.147 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:48.413 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:48.413 Creating mk/config.mk...done. 00:33:48.413 Creating mk/cc.flags.mk...done. 00:33:48.413 Type 'make' to build. 00:33:48.413 05:52:50 -- spdk/autopackage.sh@43 -- $ make -j10 00:33:48.413 make[1]: Nothing to be done for 'all'. 00:33:52.604 The Meson build system 00:33:52.604 Version: 1.4.0 00:33:52.604 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:33:52.604 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:33:52.604 Build type: native build 00:33:52.604 Program cat found: YES (/usr/bin/cat) 00:33:52.604 Project name: DPDK 00:33:52.604 Project version: 23.11.0 00:33:52.604 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:33:52.604 C linker for the host machine: cc ld.bfd 2.38 00:33:52.604 Host machine cpu family: x86_64 00:33:52.604 Host machine cpu: x86_64 00:33:52.604 Message: ## Building in Developer Mode ## 00:33:52.604 Program pkg-config found: YES (/usr/bin/pkg-config) 00:33:52.604 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:33:52.604 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:33:52.604 Program python3 found: YES (/usr/bin/python3) 00:33:52.604 Program cat found: YES (/usr/bin/cat) 00:33:52.604 Compiler for C supports arguments -march=native: YES 00:33:52.604 Checking for size of "void *" : 8 00:33:52.604 Checking for size of "void *" : 8 (cached) 00:33:52.604 Library m found: YES 00:33:52.604 Library numa found: YES 00:33:52.604 Has header "numaif.h" : YES 00:33:52.604 Library fdt found: NO 00:33:52.604 Library execinfo found: NO 00:33:52.604 Has header "execinfo.h" : YES 00:33:52.604 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:33:52.604 Run-time dependency libarchive found: NO (tried pkgconfig) 00:33:52.604 Run-time dependency libbsd found: NO (tried pkgconfig) 00:33:52.604 Run-time dependency jansson found: NO (tried pkgconfig) 00:33:52.604 Run-time dependency openssl found: YES 3.0.2 00:33:52.604 Run-time dependency libpcap found: NO (tried pkgconfig) 00:33:52.604 Library pcap found: NO 00:33:52.604 Compiler for C supports arguments -Wcast-qual: YES 00:33:52.604 Compiler for C supports arguments -Wdeprecated: YES 00:33:52.604 Compiler for C supports arguments -Wformat: YES 00:33:52.604 Compiler for C supports arguments -Wformat-nonliteral: YES 00:33:52.604 Compiler for C supports arguments -Wformat-security: YES 00:33:52.604 Compiler for C supports arguments -Wmissing-declarations: YES 00:33:52.604 Compiler for C supports arguments -Wmissing-prototypes: YES 00:33:52.604 Compiler for C supports arguments -Wnested-externs: YES 00:33:52.604 Compiler for C supports arguments -Wold-style-definition: YES 00:33:52.604 Compiler for C supports arguments -Wpointer-arith: YES 00:33:52.604 Compiler for C supports arguments -Wsign-compare: YES 00:33:52.604 Compiler for C 
supports arguments -Wstrict-prototypes: YES 00:33:52.604 Compiler for C supports arguments -Wundef: YES 00:33:52.604 Compiler for C supports arguments -Wwrite-strings: YES 00:33:52.604 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:33:52.604 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:33:52.604 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:33:52.604 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:33:52.604 Program objdump found: YES (/usr/bin/objdump) 00:33:52.604 Compiler for C supports arguments -mavx512f: YES 00:33:52.604 Checking if "AVX512 checking" compiles: YES 00:33:52.604 Fetching value of define "__SSE4_2__" : 1 00:33:52.604 Fetching value of define "__AES__" : 1 00:33:52.604 Fetching value of define "__AVX__" : 1 00:33:52.604 Fetching value of define "__AVX2__" : 1 00:33:52.604 Fetching value of define "__AVX512BW__" : (undefined) 00:33:52.604 Fetching value of define "__AVX512CD__" : (undefined) 00:33:52.604 Fetching value of define "__AVX512DQ__" : (undefined) 00:33:52.604 Fetching value of define "__AVX512F__" : (undefined) 00:33:52.604 Fetching value of define "__AVX512VL__" : (undefined) 00:33:52.604 Fetching value of define "__PCLMUL__" : 1 00:33:52.604 Fetching value of define "__RDRND__" : 1 00:33:52.604 Fetching value of define "__RDSEED__" : 1 00:33:52.604 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:33:52.604 Fetching value of define "__znver1__" : (undefined) 00:33:52.604 Fetching value of define "__znver2__" : (undefined) 00:33:52.604 Fetching value of define "__znver3__" : (undefined) 00:33:52.604 Fetching value of define "__znver4__" : (undefined) 00:33:52.604 Compiler for C supports arguments -ffat-lto-objects: YES 00:33:52.604 Library asan found: YES 00:33:52.604 Compiler for C supports arguments -Wno-format-truncation: YES 00:33:52.604 Message: lib/log: Defining dependency "log" 00:33:52.604 Message: lib/kvargs: Defining dependency "kvargs" 00:33:52.604 Message: lib/telemetry: Defining dependency "telemetry" 00:33:52.604 Library rt found: YES 00:33:52.604 Checking for function "getentropy" : NO 00:33:52.605 Message: lib/eal: Defining dependency "eal" 00:33:52.605 Message: lib/ring: Defining dependency "ring" 00:33:52.605 Message: lib/rcu: Defining dependency "rcu" 00:33:52.605 Message: lib/mempool: Defining dependency "mempool" 00:33:52.605 Message: lib/mbuf: Defining dependency "mbuf" 00:33:52.605 Fetching value of define "__PCLMUL__" : 1 (cached) 00:33:52.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:33:52.605 Compiler for C supports arguments -mpclmul: YES 00:33:52.605 Compiler for C supports arguments -maes: YES 00:33:52.605 Compiler for C supports arguments -mavx512f: YES (cached) 00:33:52.605 Compiler for C supports arguments -mavx512bw: YES 00:33:52.605 Compiler for C supports arguments -mavx512dq: YES 00:33:52.605 Compiler for C supports arguments -mavx512vl: YES 00:33:52.605 Compiler for C supports arguments -mvpclmulqdq: YES 00:33:52.605 Compiler for C supports arguments -mavx2: YES 00:33:52.605 Compiler for C supports arguments -mavx: YES 00:33:52.605 Message: lib/net: Defining dependency "net" 00:33:52.605 Message: lib/meter: Defining dependency "meter" 00:33:52.605 Message: lib/ethdev: Defining dependency "ethdev" 00:33:52.605 Message: lib/pci: Defining dependency "pci" 00:33:52.605 Message: lib/cmdline: Defining dependency "cmdline" 00:33:52.605 Message: lib/hash: Defining dependency "hash" 00:33:52.605 Message: 
lib/timer: Defining dependency "timer" 00:33:52.605 Message: lib/compressdev: Defining dependency "compressdev" 00:33:52.605 Message: lib/cryptodev: Defining dependency "cryptodev" 00:33:52.605 Message: lib/dmadev: Defining dependency "dmadev" 00:33:52.605 Compiler for C supports arguments -Wno-cast-qual: YES 00:33:52.605 Message: lib/power: Defining dependency "power" 00:33:52.605 Message: lib/reorder: Defining dependency "reorder" 00:33:52.605 Message: lib/security: Defining dependency "security" 00:33:52.605 Has header "linux/userfaultfd.h" : YES 00:33:52.605 Has header "linux/vduse.h" : YES 00:33:52.605 Message: lib/vhost: Defining dependency "vhost" 00:33:52.605 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:33:52.605 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:33:52.605 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:33:52.605 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:33:52.605 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:33:52.605 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:33:52.605 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:33:52.605 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:33:52.605 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:33:52.605 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:33:52.605 Program doxygen found: YES (/usr/bin/doxygen) 00:33:52.605 Configuring doxy-api-html.conf using configuration 00:33:52.605 Configuring doxy-api-man.conf using configuration 00:33:52.605 Program mandb found: YES (/usr/bin/mandb) 00:33:52.605 Program sphinx-build found: NO 00:33:52.605 Configuring rte_build_config.h using configuration 00:33:52.605 Message: 00:33:52.605 ================= 00:33:52.605 Applications Enabled 00:33:52.605 ================= 00:33:52.605 00:33:52.605 apps: 00:33:52.605 00:33:52.605 00:33:52.605 Message: 00:33:52.605 ================= 00:33:52.605 Libraries Enabled 00:33:52.605 ================= 00:33:52.605 00:33:52.605 libs: 00:33:52.605 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:33:52.605 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:33:52.605 cryptodev, dmadev, power, reorder, security, vhost, 00:33:52.605 00:33:52.605 Message: 00:33:52.605 =============== 00:33:52.605 Drivers Enabled 00:33:52.605 =============== 00:33:52.605 00:33:52.605 common: 00:33:52.605 00:33:52.605 bus: 00:33:52.605 pci, vdev, 00:33:52.605 mempool: 00:33:52.605 ring, 00:33:52.605 dma: 00:33:52.605 00:33:52.605 net: 00:33:52.605 00:33:52.605 crypto: 00:33:52.605 00:33:52.605 compress: 00:33:52.605 00:33:52.605 vdpa: 00:33:52.605 00:33:52.605 00:33:52.605 Message: 00:33:52.605 ================= 00:33:52.605 Content Skipped 00:33:52.605 ================= 00:33:52.605 00:33:52.605 apps: 00:33:52.605 dumpcap: explicitly disabled via build config 00:33:52.605 graph: explicitly disabled via build config 00:33:52.605 pdump: explicitly disabled via build config 00:33:52.605 proc-info: explicitly disabled via build config 00:33:52.605 test-acl: explicitly disabled via build config 00:33:52.605 test-bbdev: explicitly disabled via build config 00:33:52.605 test-cmdline: explicitly disabled via build config 00:33:52.605 test-compress-perf: explicitly disabled via build config 00:33:52.605 test-crypto-perf: explicitly disabled via build config 00:33:52.605 test-dma-perf: explicitly disabled via build config 
00:33:52.605 test-eventdev: explicitly disabled via build config 00:33:52.605 test-fib: explicitly disabled via build config 00:33:52.605 test-flow-perf: explicitly disabled via build config 00:33:52.605 test-gpudev: explicitly disabled via build config 00:33:52.605 test-mldev: explicitly disabled via build config 00:33:52.605 test-pipeline: explicitly disabled via build config 00:33:52.605 test-pmd: explicitly disabled via build config 00:33:52.605 test-regex: explicitly disabled via build config 00:33:52.605 test-sad: explicitly disabled via build config 00:33:52.605 test-security-perf: explicitly disabled via build config 00:33:52.605 00:33:52.605 libs: 00:33:52.605 metrics: explicitly disabled via build config 00:33:52.605 acl: explicitly disabled via build config 00:33:52.605 bbdev: explicitly disabled via build config 00:33:52.605 bitratestats: explicitly disabled via build config 00:33:52.605 bpf: explicitly disabled via build config 00:33:52.605 cfgfile: explicitly disabled via build config 00:33:52.605 distributor: explicitly disabled via build config 00:33:52.605 efd: explicitly disabled via build config 00:33:52.605 eventdev: explicitly disabled via build config 00:33:52.605 dispatcher: explicitly disabled via build config 00:33:52.605 gpudev: explicitly disabled via build config 00:33:52.605 gro: explicitly disabled via build config 00:33:52.605 gso: explicitly disabled via build config 00:33:52.605 ip_frag: explicitly disabled via build config 00:33:52.605 jobstats: explicitly disabled via build config 00:33:52.605 latencystats: explicitly disabled via build config 00:33:52.605 lpm: explicitly disabled via build config 00:33:52.605 member: explicitly disabled via build config 00:33:52.605 pcapng: explicitly disabled via build config 00:33:52.605 rawdev: explicitly disabled via build config 00:33:52.605 regexdev: explicitly disabled via build config 00:33:52.605 mldev: explicitly disabled via build config 00:33:52.605 rib: explicitly disabled via build config 00:33:52.605 sched: explicitly disabled via build config 00:33:52.605 stack: explicitly disabled via build config 00:33:52.605 ipsec: explicitly disabled via build config 00:33:52.605 pdcp: explicitly disabled via build config 00:33:52.605 fib: explicitly disabled via build config 00:33:52.605 port: explicitly disabled via build config 00:33:52.605 pdump: explicitly disabled via build config 00:33:52.605 table: explicitly disabled via build config 00:33:52.605 pipeline: explicitly disabled via build config 00:33:52.605 graph: explicitly disabled via build config 00:33:52.605 node: explicitly disabled via build config 00:33:52.605 00:33:52.605 drivers: 00:33:52.605 common/cpt: not in enabled drivers build config 00:33:52.605 common/dpaax: not in enabled drivers build config 00:33:52.605 common/iavf: not in enabled drivers build config 00:33:52.605 common/idpf: not in enabled drivers build config 00:33:52.605 common/mvep: not in enabled drivers build config 00:33:52.605 common/octeontx: not in enabled drivers build config 00:33:52.605 bus/auxiliary: not in enabled drivers build config 00:33:52.605 bus/cdx: not in enabled drivers build config 00:33:52.605 bus/dpaa: not in enabled drivers build config 00:33:52.605 bus/fslmc: not in enabled drivers build config 00:33:52.605 bus/ifpga: not in enabled drivers build config 00:33:52.605 bus/platform: not in enabled drivers build config 00:33:52.605 bus/vmbus: not in enabled drivers build config 00:33:52.605 common/cnxk: not in enabled drivers build config 00:33:52.605 common/mlx5: 
not in enabled drivers build config 00:33:52.605 common/nfp: not in enabled drivers build config 00:33:52.605 common/qat: not in enabled drivers build config 00:33:52.605 common/sfc_efx: not in enabled drivers build config 00:33:52.605 mempool/bucket: not in enabled drivers build config 00:33:52.605 mempool/cnxk: not in enabled drivers build config 00:33:52.605 mempool/dpaa: not in enabled drivers build config 00:33:52.605 mempool/dpaa2: not in enabled drivers build config 00:33:52.605 mempool/octeontx: not in enabled drivers build config 00:33:52.605 mempool/stack: not in enabled drivers build config 00:33:52.605 dma/cnxk: not in enabled drivers build config 00:33:52.605 dma/dpaa: not in enabled drivers build config 00:33:52.605 dma/dpaa2: not in enabled drivers build config 00:33:52.605 dma/hisilicon: not in enabled drivers build config 00:33:52.605 dma/idxd: not in enabled drivers build config 00:33:52.605 dma/ioat: not in enabled drivers build config 00:33:52.605 dma/skeleton: not in enabled drivers build config 00:33:52.605 net/af_packet: not in enabled drivers build config 00:33:52.605 net/af_xdp: not in enabled drivers build config 00:33:52.605 net/ark: not in enabled drivers build config 00:33:52.605 net/atlantic: not in enabled drivers build config 00:33:52.605 net/avp: not in enabled drivers build config 00:33:52.605 net/axgbe: not in enabled drivers build config 00:33:52.605 net/bnx2x: not in enabled drivers build config 00:33:52.605 net/bnxt: not in enabled drivers build config 00:33:52.605 net/bonding: not in enabled drivers build config 00:33:52.605 net/cnxk: not in enabled drivers build config 00:33:52.606 net/cpfl: not in enabled drivers build config 00:33:52.606 net/cxgbe: not in enabled drivers build config 00:33:52.606 net/dpaa: not in enabled drivers build config 00:33:52.606 net/dpaa2: not in enabled drivers build config 00:33:52.606 net/e1000: not in enabled drivers build config 00:33:52.606 net/ena: not in enabled drivers build config 00:33:52.606 net/enetc: not in enabled drivers build config 00:33:52.606 net/enetfec: not in enabled drivers build config 00:33:52.606 net/enic: not in enabled drivers build config 00:33:52.606 net/failsafe: not in enabled drivers build config 00:33:52.606 net/fm10k: not in enabled drivers build config 00:33:52.606 net/gve: not in enabled drivers build config 00:33:52.606 net/hinic: not in enabled drivers build config 00:33:52.606 net/hns3: not in enabled drivers build config 00:33:52.606 net/i40e: not in enabled drivers build config 00:33:52.606 net/iavf: not in enabled drivers build config 00:33:52.606 net/ice: not in enabled drivers build config 00:33:52.606 net/idpf: not in enabled drivers build config 00:33:52.606 net/igc: not in enabled drivers build config 00:33:52.606 net/ionic: not in enabled drivers build config 00:33:52.606 net/ipn3ke: not in enabled drivers build config 00:33:52.606 net/ixgbe: not in enabled drivers build config 00:33:52.606 net/mana: not in enabled drivers build config 00:33:52.606 net/memif: not in enabled drivers build config 00:33:52.606 net/mlx4: not in enabled drivers build config 00:33:52.606 net/mlx5: not in enabled drivers build config 00:33:52.606 net/mvneta: not in enabled drivers build config 00:33:52.606 net/mvpp2: not in enabled drivers build config 00:33:52.606 net/netvsc: not in enabled drivers build config 00:33:52.606 net/nfb: not in enabled drivers build config 00:33:52.606 net/nfp: not in enabled drivers build config 00:33:52.606 net/ngbe: not in enabled drivers build config 00:33:52.606 
net/null: not in enabled drivers build config 00:33:52.606 net/octeontx: not in enabled drivers build config 00:33:52.606 net/octeon_ep: not in enabled drivers build config 00:33:52.606 net/pcap: not in enabled drivers build config 00:33:52.606 net/pfe: not in enabled drivers build config 00:33:52.606 net/qede: not in enabled drivers build config 00:33:52.606 net/ring: not in enabled drivers build config 00:33:52.606 net/sfc: not in enabled drivers build config 00:33:52.606 net/softnic: not in enabled drivers build config 00:33:52.606 net/tap: not in enabled drivers build config 00:33:52.606 net/thunderx: not in enabled drivers build config 00:33:52.606 net/txgbe: not in enabled drivers build config 00:33:52.606 net/vdev_netvsc: not in enabled drivers build config 00:33:52.606 net/vhost: not in enabled drivers build config 00:33:52.606 net/virtio: not in enabled drivers build config 00:33:52.606 net/vmxnet3: not in enabled drivers build config 00:33:52.606 raw/*: missing internal dependency, "rawdev" 00:33:52.606 crypto/armv8: not in enabled drivers build config 00:33:52.606 crypto/bcmfs: not in enabled drivers build config 00:33:52.606 crypto/caam_jr: not in enabled drivers build config 00:33:52.606 crypto/ccp: not in enabled drivers build config 00:33:52.606 crypto/cnxk: not in enabled drivers build config 00:33:52.606 crypto/dpaa_sec: not in enabled drivers build config 00:33:52.606 crypto/dpaa2_sec: not in enabled drivers build config 00:33:52.606 crypto/ipsec_mb: not in enabled drivers build config 00:33:52.606 crypto/mlx5: not in enabled drivers build config 00:33:52.606 crypto/mvsam: not in enabled drivers build config 00:33:52.606 crypto/nitrox: not in enabled drivers build config 00:33:52.606 crypto/null: not in enabled drivers build config 00:33:52.606 crypto/octeontx: not in enabled drivers build config 00:33:52.606 crypto/openssl: not in enabled drivers build config 00:33:52.606 crypto/scheduler: not in enabled drivers build config 00:33:52.606 crypto/uadk: not in enabled drivers build config 00:33:52.606 crypto/virtio: not in enabled drivers build config 00:33:52.606 compress/isal: not in enabled drivers build config 00:33:52.606 compress/mlx5: not in enabled drivers build config 00:33:52.606 compress/octeontx: not in enabled drivers build config 00:33:52.606 compress/zlib: not in enabled drivers build config 00:33:52.606 regex/*: missing internal dependency, "regexdev" 00:33:52.606 ml/*: missing internal dependency, "mldev" 00:33:52.606 vdpa/ifc: not in enabled drivers build config 00:33:52.606 vdpa/mlx5: not in enabled drivers build config 00:33:52.606 vdpa/nfp: not in enabled drivers build config 00:33:52.606 vdpa/sfc: not in enabled drivers build config 00:33:52.606 event/*: missing internal dependency, "eventdev" 00:33:52.606 baseband/*: missing internal dependency, "bbdev" 00:33:52.606 gpu/*: missing internal dependency, "gpudev" 00:33:52.606 00:33:52.606 00:33:52.606 Build targets in project: 85 00:33:52.606 00:33:52.606 DPDK 23.11.0 00:33:52.606 00:33:52.606 User defined options 00:33:52.606 default_library : static 00:33:52.606 libdir : lib 00:33:52.606 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:52.606 b_lto : true 00:33:52.606 b_sanitize : address 00:33:52.606 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:33:52.606 c_link_args : 00:33:52.606 cpu_instruction_set: native 00:33:52.606 disable_apps : 
test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:33:52.606 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:33:52.606 enable_docs : false 00:33:52.606 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:33:52.606 enable_kmods : false 00:33:52.606 tests : false 00:33:52.606 00:33:52.606 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:33:52.865 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:33:53.124 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:33:53.124 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:33:53.124 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:33:53.124 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:33:53.124 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:33:53.124 [6/265] Linking static target lib/librte_kvargs.a 00:33:53.124 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:33:53.124 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:33:53.383 [9/265] Linking static target lib/librte_log.a 00:33:53.383 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:33:53.383 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.383 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:33:53.642 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:33:53.642 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:33:53.642 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:33:53.642 [16/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.642 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:33:53.901 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:33:53.901 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:33:53.901 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:33:53.901 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:33:53.901 [22/265] Linking target lib/librte_log.so.24.0 00:33:54.159 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:33:54.159 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:33:54.159 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:33:54.159 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:33:54.418 [27/265] Linking target lib/librte_kvargs.so.24.0 00:33:54.418 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:33:54.418 [29/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:33:54.418 [30/265] Linking static target lib/librte_telemetry.a 00:33:54.418 [31/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:33:54.418 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:33:54.418 [33/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:33:54.418 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:33:54.418 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:33:54.418 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:33:54.675 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:33:54.675 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:33:54.675 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:33:54.675 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:33:54.675 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:33:54.933 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:33:54.933 [43/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:33:54.933 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:33:55.191 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:33:55.191 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:33:55.191 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:33:55.191 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:33:55.191 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:33:55.191 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:33:55.450 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:33:55.450 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:33:55.450 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:33:55.450 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:33:55.450 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:33:55.708 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:33:55.708 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:33:55.708 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:33:55.708 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:33:55.708 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:33:55.708 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:33:55.708 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:33:55.708 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:33:55.966 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:33:55.966 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:33:55.966 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:33:55.966 [67/265] Linking target lib/librte_telemetry.so.24.0 00:33:56.225 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:33:56.225 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:33:56.225 [70/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:33:56.225 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:33:56.225 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:33:56.225 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:33:56.225 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:33:56.225 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:33:56.225 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:33:56.225 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:33:56.483 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:33:56.483 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:33:56.483 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:33:56.483 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:33:56.742 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:33:56.742 [83/265] Linking static target lib/librte_ring.a 00:33:56.742 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:33:56.742 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:33:57.000 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:33:57.000 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:33:57.000 [88/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.000 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:33:57.259 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:33:57.259 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:33:57.259 [92/265] Linking static target lib/librte_eal.a 00:33:57.259 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:33:57.259 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:33:57.259 [95/265] Linking static target lib/librte_mempool.a 00:33:57.519 [96/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:33:57.519 [97/265] Linking static target lib/librte_rcu.a 00:33:57.519 [98/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:33:57.519 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:33:57.519 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:33:57.519 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:33:57.519 [102/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:33:57.519 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:33:57.778 [104/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.778 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:33:57.778 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:33:57.778 [107/265] Linking static target lib/librte_net.a 00:33:57.778 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.778 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:33:57.778 [110/265] Linking static target lib/librte_meter.a 00:33:58.037 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:33:58.037 [112/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:33:58.037 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:33:58.037 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:33:58.296 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:33:58.296 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:33:58.554 [117/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:33:58.554 [118/265] Linking static target lib/librte_mbuf.a 00:33:58.554 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:33:58.812 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:33:59.071 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:33:59.071 [122/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.071 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:33:59.071 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:33:59.071 [125/265] Linking static target lib/librte_pci.a 00:33:59.330 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:33:59.330 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:33:59.330 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:33:59.330 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:33:59.330 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:33:59.330 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.330 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:33:59.587 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:33:59.587 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:33:59.587 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:33:59.587 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:33:59.587 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:33:59.587 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:33:59.587 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:33:59.587 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:33:59.587 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:33:59.845 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:33:59.845 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:33:59.845 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:33:59.845 [145/265] Linking static target lib/librte_cmdline.a 00:34:00.104 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:34:00.104 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:34:00.363 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:34:00.363 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:34:00.363 [150/265] Linking static target lib/librte_timer.a 00:34:00.641 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:34:00.641 
[152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:34:00.641 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:34:00.641 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:34:00.641 [155/265] Linking static target lib/librte_compressdev.a 00:34:00.641 [156/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:34:00.911 [157/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:34:00.911 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:34:00.911 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:34:00.911 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:34:00.911 [161/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:34:01.170 [162/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:34:01.170 [163/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.428 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:34:01.428 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:34:01.428 [166/265] Linking static target lib/librte_dmadev.a 00:34:01.428 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:34:01.686 [168/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:34:01.686 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:34:01.686 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:34:01.945 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.945 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:34:01.945 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:34:01.945 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:34:02.204 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:34:02.204 [176/265] Linking static target lib/librte_power.a 00:34:02.463 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:34:02.463 [178/265] Linking static target lib/librte_reorder.a 00:34:02.463 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:34:02.463 [180/265] Linking static target lib/librte_security.a 00:34:02.463 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:34:02.463 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:34:02.721 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:34:02.721 [184/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:34:02.721 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:34:02.721 [186/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:03.288 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:03.288 [188/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:34:03.288 [189/265] Linking static target lib/librte_cryptodev.a 00:34:03.288 [190/265] Linking static target lib/librte_ethdev.a 00:34:03.548 [191/265] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:34:03.548 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:34:03.548 [193/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:34:03.548 [194/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:34:03.548 [195/265] Linking static target lib/librte_hash.a 00:34:03.806 [196/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:34:04.066 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:34:04.066 [198/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:34:04.325 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:34:04.325 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:04.325 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:34:04.325 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:34:04.584 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:34:04.842 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:34:04.842 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:34:04.842 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:34:04.842 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:34:05.100 [208/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:34:05.100 [209/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:34:05.100 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:05.100 [211/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:34:05.100 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:05.100 [213/265] Linking static target drivers/librte_bus_vdev.a 00:34:05.100 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:34:05.360 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:05.360 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.360 [217/265] Linking static target drivers/librte_bus_pci.a 00:34:05.360 [218/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:05.360 [219/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:34:05.360 [220/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:34:05.360 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:34:05.619 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:05.619 [223/265] Linking static target drivers/librte_mempool_ring.a 00:34:05.619 [224/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:05.619 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:08.908 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:12.198 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:34:14.103 [228/265] Linking target lib/librte_eal.so.24.0 
00:34:14.103 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:34:14.362 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:34:14.362 [230/265] Linking target lib/librte_meter.so.24.0 00:34:14.362 [231/265] Linking target lib/librte_pci.so.24.0 00:34:14.621 [232/265] Linking target lib/librte_ring.so.24.0 00:34:14.621 [233/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:34:14.621 [234/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:34:14.621 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:34:14.621 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:34:14.621 [237/265] Linking target lib/librte_timer.so.24.0 00:34:14.879 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:34:14.879 [239/265] Linking target lib/librte_dmadev.so.24.0 00:34:15.137 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:34:15.396 [241/265] Linking target lib/librte_mempool.so.24.0 00:34:15.396 [242/265] Linking target lib/librte_rcu.so.24.0 00:34:15.396 [243/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:34:15.655 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:34:15.915 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:34:15.915 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:34:17.291 [247/265] Linking target lib/librte_mbuf.so.24.0 00:34:17.550 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:34:17.810 [249/265] Linking target lib/librte_reorder.so.24.0 00:34:18.069 [250/265] Linking target lib/librte_compressdev.so.24.0 00:34:18.328 [251/265] Linking target lib/librte_net.so.24.0 00:34:18.587 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:34:19.562 [253/265] Linking target lib/librte_cmdline.so.24.0 00:34:19.821 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:34:19.821 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:34:20.080 [256/265] Linking target lib/librte_security.so.24.0 00:34:22.614 [257/265] Linking target lib/librte_hash.so.24.0 00:34:22.614 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:34:29.180 [259/265] Linking target lib/librte_ethdev.so.24.0 00:34:29.180 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs 00:34:29.439 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:34:31.345 [261/265] Linking target lib/librte_power.so.24.0 00:34:33.880 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:34:33.880 [263/265] Linking static target lib/librte_vhost.a 00:34:35.786 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:35:22.493 [265/265] Linking target lib/librte_vhost.so.24.0 00:35:22.493 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs 00:35:22.493 INFO: autodetecting backend as ninja 00:35:22.493 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:35:22.493 CC lib/log/log.o 00:35:22.493 CC lib/ut_mock/mock.o 00:35:22.493 CC lib/ut/ut.o 00:35:22.493 CC lib/log/log_flags.o 00:35:22.493 CC lib/log/log_deprecated.o 00:35:22.493 LIB libspdk_ut_mock.a 
00:35:22.493 LIB libspdk_log.a 00:35:22.493 LIB libspdk_ut.a 00:35:22.493 CC lib/dma/dma.o 00:35:22.493 CC lib/ioat/ioat.o 00:35:22.493 CC lib/util/base64.o 00:35:22.493 CC lib/util/bit_array.o 00:35:22.493 CC lib/util/cpuset.o 00:35:22.493 CXX lib/trace_parser/trace.o 00:35:22.493 CC lib/util/crc16.o 00:35:22.493 CC lib/util/crc32c.o 00:35:22.493 CC lib/util/crc32.o 00:35:22.493 CC lib/vfio_user/host/vfio_user_pci.o 00:35:22.493 CC lib/vfio_user/host/vfio_user.o 00:35:22.493 CC lib/util/crc32_ieee.o 00:35:22.493 CC lib/util/crc64.o 00:35:22.493 CC lib/util/dif.o 00:35:22.493 LIB libspdk_dma.a 00:35:22.493 CC lib/util/fd.o 00:35:22.493 CC lib/util/file.o 00:35:22.493 CC lib/util/hexlify.o 00:35:22.493 LIB libspdk_ioat.a 00:35:22.493 CC lib/util/iov.o 00:35:22.493 CC lib/util/math.o 00:35:22.493 CC lib/util/pipe.o 00:35:22.493 CC lib/util/strerror_tls.o 00:35:22.493 CC lib/util/string.o 00:35:22.493 LIB libspdk_vfio_user.a 00:35:22.493 CC lib/util/uuid.o 00:35:22.493 CC lib/util/fd_group.o 00:35:22.493 CC lib/util/xor.o 00:35:22.493 CC lib/util/zipf.o 00:35:22.493 LIB libspdk_util.a 00:35:22.493 LIB libspdk_trace_parser.a 00:35:22.493 CC lib/json/json_parse.o 00:35:22.493 CC lib/json/json_util.o 00:35:22.493 CC lib/idxd/idxd_user.o 00:35:22.493 CC lib/json/json_write.o 00:35:22.493 CC lib/idxd/idxd.o 00:35:22.493 CC lib/vmd/vmd.o 00:35:22.493 CC lib/vmd/led.o 00:35:22.493 CC lib/rdma/common.o 00:35:22.493 CC lib/conf/conf.o 00:35:22.493 CC lib/env_dpdk/env.o 00:35:22.493 CC lib/rdma/rdma_verbs.o 00:35:22.493 CC lib/env_dpdk/memory.o 00:35:22.493 CC lib/env_dpdk/pci.o 00:35:22.493 LIB libspdk_conf.a 00:35:22.493 CC lib/env_dpdk/init.o 00:35:22.493 CC lib/env_dpdk/threads.o 00:35:22.493 CC lib/env_dpdk/pci_ioat.o 00:35:22.493 LIB libspdk_json.a 00:35:22.493 CC lib/env_dpdk/pci_virtio.o 00:35:22.493 LIB libspdk_idxd.a 00:35:22.493 LIB libspdk_vmd.a 00:35:22.493 LIB libspdk_rdma.a 00:35:22.493 CC lib/env_dpdk/pci_vmd.o 00:35:22.493 CC lib/env_dpdk/pci_idxd.o 00:35:22.493 CC lib/env_dpdk/pci_event.o 00:35:22.493 CC lib/env_dpdk/sigbus_handler.o 00:35:22.493 CC lib/jsonrpc/jsonrpc_server.o 00:35:22.493 CC lib/env_dpdk/pci_dpdk.o 00:35:22.493 CC lib/env_dpdk/pci_dpdk_2207.o 00:35:22.493 CC lib/env_dpdk/pci_dpdk_2211.o 00:35:22.493 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:35:22.493 CC lib/jsonrpc/jsonrpc_client.o 00:35:22.493 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:35:22.493 LIB libspdk_jsonrpc.a 00:35:22.493 CC lib/rpc/rpc.o 00:35:22.493 LIB libspdk_rpc.a 00:35:22.493 LIB libspdk_env_dpdk.a 00:35:22.493 CC lib/trace/trace_flags.o 00:35:22.493 CC lib/notify/notify.o 00:35:22.493 CC lib/trace/trace.o 00:35:22.493 CC lib/notify/notify_rpc.o 00:35:22.493 CC lib/trace/trace_rpc.o 00:35:22.493 CC lib/sock/sock.o 00:35:22.494 CC lib/sock/sock_rpc.o 00:35:22.494 LIB libspdk_notify.a 00:35:22.494 LIB libspdk_trace.a 00:35:22.494 LIB libspdk_sock.a 00:35:22.494 CC lib/thread/thread.o 00:35:22.494 CC lib/thread/iobuf.o 00:35:22.494 CC lib/nvme/nvme_ctrlr_cmd.o 00:35:22.494 CC lib/nvme/nvme_ctrlr.o 00:35:22.494 CC lib/nvme/nvme_fabric.o 00:35:22.494 CC lib/nvme/nvme_ns_cmd.o 00:35:22.494 CC lib/nvme/nvme_ns.o 00:35:22.494 CC lib/nvme/nvme_qpair.o 00:35:22.494 CC lib/nvme/nvme_pcie_common.o 00:35:22.494 CC lib/nvme/nvme_pcie.o 00:35:22.494 CC lib/nvme/nvme.o 00:35:22.494 CC lib/nvme/nvme_quirks.o 00:35:22.494 LIB libspdk_thread.a 00:35:22.494 CC lib/nvme/nvme_transport.o 00:35:22.494 CC lib/nvme/nvme_discovery.o 00:35:22.494 CC lib/accel/accel.o 00:35:22.494 CC lib/blob/blobstore.o 00:35:22.494 CC 
lib/init/json_config.o 00:35:22.494 CC lib/init/subsystem.o 00:35:22.494 CC lib/init/subsystem_rpc.o 00:35:22.494 CC lib/init/rpc.o 00:35:22.494 CC lib/accel/accel_rpc.o 00:35:22.494 CC lib/virtio/virtio.o 00:35:22.494 CC lib/virtio/virtio_vhost_user.o 00:35:22.494 LIB libspdk_init.a 00:35:22.494 CC lib/virtio/virtio_vfio_user.o 00:35:22.494 CC lib/virtio/virtio_pci.o 00:35:22.494 CC lib/blob/request.o 00:35:22.494 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:35:22.494 CC lib/event/app.o 00:35:22.494 CC lib/event/reactor.o 00:35:22.494 CC lib/event/log_rpc.o 00:35:22.494 CC lib/event/app_rpc.o 00:35:22.494 CC lib/accel/accel_sw.o 00:35:22.494 CC lib/blob/zeroes.o 00:35:22.494 LIB libspdk_virtio.a 00:35:22.494 CC lib/blob/blob_bs_dev.o 00:35:22.494 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:35:22.494 CC lib/nvme/nvme_tcp.o 00:35:22.494 CC lib/nvme/nvme_opal.o 00:35:22.494 LIB libspdk_accel.a 00:35:22.494 CC lib/nvme/nvme_io_msg.o 00:35:22.494 CC lib/nvme/nvme_poll_group.o 00:35:22.494 CC lib/nvme/nvme_zns.o 00:35:22.494 CC lib/event/scheduler_static.o 00:35:22.494 CC lib/nvme/nvme_cuse.o 00:35:22.494 LIB libspdk_event.a 00:35:22.494 CC lib/bdev/bdev.o 00:35:22.494 CC lib/bdev/bdev_rpc.o 00:35:22.494 CC lib/bdev/bdev_zone.o 00:35:22.494 CC lib/bdev/part.o 00:35:22.494 CC lib/bdev/scsi_nvme.o 00:35:22.494 CC lib/nvme/nvme_vfio_user.o 00:35:22.494 CC lib/nvme/nvme_rdma.o 00:35:22.494 LIB libspdk_blob.a 00:35:22.494 CC lib/blobfs/blobfs.o 00:35:22.494 CC lib/blobfs/tree.o 00:35:22.494 CC lib/lvol/lvol.o 00:35:22.494 LIB libspdk_blobfs.a 00:35:22.494 LIB libspdk_lvol.a 00:35:22.753 LIB libspdk_nvme.a 00:35:22.753 LIB libspdk_bdev.a 00:35:22.753 CC lib/nvmf/ctrlr_discovery.o 00:35:22.753 CC lib/nvmf/ctrlr_bdev.o 00:35:22.753 CC lib/nbd/nbd_rpc.o 00:35:22.753 CC lib/nbd/nbd.o 00:35:22.753 CC lib/nvmf/ctrlr.o 00:35:22.753 CC lib/nvmf/subsystem.o 00:35:22.753 CC lib/nvmf/nvmf_rpc.o 00:35:22.753 CC lib/nvmf/nvmf.o 00:35:22.753 CC lib/ftl/ftl_core.o 00:35:22.753 CC lib/scsi/dev.o 00:35:23.012 CC lib/scsi/lun.o 00:35:23.012 CC lib/nvmf/transport.o 00:35:23.012 CC lib/nvmf/tcp.o 00:35:23.012 CC lib/ftl/ftl_init.o 00:35:23.012 CC lib/ftl/ftl_layout.o 00:35:23.012 LIB libspdk_nbd.a 00:35:23.012 CC lib/scsi/port.o 00:35:23.012 CC lib/nvmf/rdma.o 00:35:23.012 CC lib/ftl/ftl_debug.o 00:35:23.012 CC lib/ftl/ftl_io.o 00:35:23.270 CC lib/ftl/ftl_sb.o 00:35:23.270 CC lib/scsi/scsi.o 00:35:23.270 CC lib/ftl/ftl_l2p.o 00:35:23.270 CC lib/ftl/ftl_l2p_flat.o 00:35:23.270 CC lib/ftl/ftl_nv_cache.o 00:35:23.270 CC lib/ftl/ftl_band.o 00:35:23.270 CC lib/scsi/scsi_bdev.o 00:35:23.270 CC lib/ftl/ftl_band_ops.o 00:35:23.270 CC lib/ftl/ftl_writer.o 00:35:23.270 CC lib/ftl/ftl_rq.o 00:35:23.270 CC lib/ftl/ftl_reloc.o 00:35:23.270 CC lib/scsi/scsi_pr.o 00:35:23.529 CC lib/ftl/ftl_l2p_cache.o 00:35:23.529 CC lib/ftl/ftl_p2l.o 00:35:23.529 CC lib/ftl/mngt/ftl_mngt.o 00:35:23.529 CC lib/scsi/scsi_rpc.o 00:35:23.529 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:35:23.529 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:35:23.529 CC lib/ftl/mngt/ftl_mngt_startup.o 00:35:23.529 CC lib/scsi/task.o 00:35:23.529 CC lib/ftl/mngt/ftl_mngt_md.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_misc.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_band.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:35:23.788 LIB libspdk_scsi.a 00:35:23.788 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:35:23.788 LIB libspdk_nvmf.a 
00:35:23.788 CC lib/ftl/utils/ftl_conf.o 00:35:23.788 CC lib/ftl/utils/ftl_md.o 00:35:23.788 CC lib/ftl/utils/ftl_mempool.o 00:35:23.788 CC lib/ftl/utils/ftl_bitmap.o 00:35:23.788 CC lib/iscsi/conn.o 00:35:23.788 CC lib/vhost/vhost.o 00:35:23.788 CC lib/vhost/vhost_rpc.o 00:35:23.788 CC lib/vhost/vhost_scsi.o 00:35:24.047 CC lib/vhost/vhost_blk.o 00:35:24.047 CC lib/vhost/rte_vhost_user.o 00:35:24.047 CC lib/ftl/utils/ftl_property.o 00:35:24.047 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:35:24.047 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:35:24.047 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:35:24.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:35:24.047 CC lib/iscsi/init_grp.o 00:35:24.047 CC lib/iscsi/iscsi.o 00:35:24.306 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:35:24.306 CC lib/iscsi/md5.o 00:35:24.306 CC lib/iscsi/param.o 00:35:24.306 CC lib/iscsi/portal_grp.o 00:35:24.306 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:35:24.306 CC lib/iscsi/tgt_node.o 00:35:24.306 CC lib/iscsi/iscsi_subsystem.o 00:35:24.306 CC lib/iscsi/iscsi_rpc.o 00:35:24.306 CC lib/iscsi/task.o 00:35:24.567 CC lib/ftl/upgrade/ftl_sb_v3.o 00:35:24.567 CC lib/ftl/upgrade/ftl_sb_v5.o 00:35:24.567 CC lib/ftl/nvc/ftl_nvc_dev.o 00:35:24.567 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:35:24.567 CC lib/ftl/base/ftl_base_dev.o 00:35:24.567 CC lib/ftl/base/ftl_base_bdev.o 00:35:24.567 LIB libspdk_vhost.a 00:35:24.825 LIB libspdk_ftl.a 00:35:24.825 LIB libspdk_iscsi.a 00:35:24.825 CC module/env_dpdk/env_dpdk_rpc.o 00:35:24.825 CC module/accel/dsa/accel_dsa.o 00:35:24.826 CC module/blob/bdev/blob_bdev.o 00:35:24.826 CC module/accel/iaa/accel_iaa.o 00:35:24.826 CC module/accel/ioat/accel_ioat.o 00:35:24.826 CC module/sock/posix/posix.o 00:35:24.826 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:35:24.826 CC module/scheduler/gscheduler/gscheduler.o 00:35:24.826 CC module/scheduler/dynamic/scheduler_dynamic.o 00:35:24.826 CC module/accel/error/accel_error.o 00:35:25.084 LIB libspdk_env_dpdk_rpc.a 00:35:25.084 CC module/accel/error/accel_error_rpc.o 00:35:25.084 LIB libspdk_scheduler_dpdk_governor.a 00:35:25.084 LIB libspdk_scheduler_gscheduler.a 00:35:25.084 CC module/accel/ioat/accel_ioat_rpc.o 00:35:25.084 CC module/accel/iaa/accel_iaa_rpc.o 00:35:25.084 CC module/accel/dsa/accel_dsa_rpc.o 00:35:25.084 LIB libspdk_scheduler_dynamic.a 00:35:25.084 LIB libspdk_blob_bdev.a 00:35:25.084 LIB libspdk_accel_error.a 00:35:25.084 LIB libspdk_accel_ioat.a 00:35:25.084 LIB libspdk_accel_iaa.a 00:35:25.084 LIB libspdk_accel_dsa.a 00:35:25.343 CC module/bdev/error/vbdev_error.o 00:35:25.343 CC module/bdev/gpt/gpt.o 00:35:25.343 CC module/bdev/delay/vbdev_delay.o 00:35:25.343 CC module/blobfs/bdev/blobfs_bdev.o 00:35:25.343 CC module/bdev/lvol/vbdev_lvol.o 00:35:25.343 CC module/bdev/malloc/bdev_malloc.o 00:35:25.343 CC module/bdev/null/bdev_null.o 00:35:25.343 CC module/bdev/passthru/vbdev_passthru.o 00:35:25.343 CC module/bdev/nvme/bdev_nvme.o 00:35:25.343 LIB libspdk_sock_posix.a 00:35:25.343 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:35:25.343 CC module/bdev/gpt/vbdev_gpt.o 00:35:25.343 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:35:25.343 CC module/bdev/error/vbdev_error_rpc.o 00:35:25.343 CC module/bdev/null/bdev_null_rpc.o 00:35:25.343 CC module/bdev/delay/vbdev_delay_rpc.o 00:35:25.343 CC module/bdev/malloc/bdev_malloc_rpc.o 00:35:25.343 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:35:25.343 LIB libspdk_bdev_passthru.a 00:35:25.602 LIB libspdk_blobfs_bdev.a 00:35:25.602 LIB libspdk_bdev_error.a 00:35:25.602 LIB libspdk_bdev_gpt.a 00:35:25.602 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:35:25.602 CC module/bdev/raid/bdev_raid.o 00:35:25.602 LIB libspdk_bdev_malloc.a 00:35:25.602 CC module/bdev/split/vbdev_split.o 00:35:25.602 LIB libspdk_bdev_delay.a 00:35:25.602 LIB libspdk_bdev_null.a 00:35:25.602 CC module/bdev/zone_block/vbdev_zone_block.o 00:35:25.602 CC module/bdev/nvme/nvme_rpc.o 00:35:25.602 CC module/bdev/aio/bdev_aio.o 00:35:25.602 LIB libspdk_bdev_lvol.a 00:35:25.602 CC module/bdev/ftl/bdev_ftl.o 00:35:25.602 CC module/bdev/ftl/bdev_ftl_rpc.o 00:35:25.602 CC module/bdev/iscsi/bdev_iscsi.o 00:35:25.602 CC module/bdev/split/vbdev_split_rpc.o 00:35:25.861 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:35:25.861 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:35:25.861 CC module/bdev/nvme/bdev_mdns_client.o 00:35:25.861 LIB libspdk_bdev_ftl.a 00:35:25.861 LIB libspdk_bdev_split.a 00:35:25.861 CC module/bdev/aio/bdev_aio_rpc.o 00:35:25.861 CC module/bdev/nvme/vbdev_opal.o 00:35:25.861 CC module/bdev/nvme/vbdev_opal_rpc.o 00:35:25.861 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:35:25.861 LIB libspdk_bdev_iscsi.a 00:35:25.861 LIB libspdk_bdev_zone_block.a 00:35:25.861 CC module/bdev/raid/bdev_raid_rpc.o 00:35:25.861 CC module/bdev/raid/bdev_raid_sb.o 00:35:25.861 CC module/bdev/raid/raid0.o 00:35:25.861 CC module/bdev/raid/raid1.o 00:35:25.861 CC module/bdev/virtio/bdev_virtio_scsi.o 00:35:25.861 LIB libspdk_bdev_aio.a 00:35:26.120 CC module/bdev/virtio/bdev_virtio_blk.o 00:35:26.120 CC module/bdev/raid/concat.o 00:35:26.120 CC module/bdev/virtio/bdev_virtio_rpc.o 00:35:26.120 CC module/bdev/raid/raid5f.o 00:35:26.120 LIB libspdk_bdev_nvme.a 00:35:26.120 LIB libspdk_bdev_virtio.a 00:35:26.378 LIB libspdk_bdev_raid.a 00:35:26.637 CC module/event/subsystems/vmd/vmd.o 00:35:26.637 CC module/event/subsystems/vmd/vmd_rpc.o 00:35:26.637 CC module/event/subsystems/scheduler/scheduler.o 00:35:26.637 CC module/event/subsystems/iobuf/iobuf.o 00:35:26.637 CC module/event/subsystems/sock/sock.o 00:35:26.637 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:35:26.637 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:35:26.637 LIB libspdk_event_sock.a 00:35:26.637 LIB libspdk_event_scheduler.a 00:35:26.637 LIB libspdk_event_vmd.a 00:35:26.637 LIB libspdk_event_vhost_blk.a 00:35:26.637 LIB libspdk_event_iobuf.a 00:35:26.895 CC module/event/subsystems/accel/accel.o 00:35:26.895 LIB libspdk_event_accel.a 00:35:27.154 CC module/event/subsystems/bdev/bdev.o 00:35:27.154 LIB libspdk_event_bdev.a 00:35:27.412 CC module/event/subsystems/scsi/scsi.o 00:35:27.412 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:35:27.412 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:35:27.412 CC module/event/subsystems/nbd/nbd.o 00:35:27.412 LIB libspdk_event_scsi.a 00:35:27.412 LIB libspdk_event_nbd.a 00:35:27.412 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:35:27.412 LIB libspdk_event_nvmf.a 00:35:27.412 CC module/event/subsystems/iscsi/iscsi.o 00:35:27.670 LIB libspdk_event_vhost_scsi.a 00:35:27.670 LIB libspdk_event_iscsi.a 00:35:27.928 CXX app/trace/trace.o 00:35:27.928 TEST_HEADER include/spdk/config.h 00:35:27.928 CXX test/cpp_headers/accel.o 00:35:27.928 CC test/event/event_perf/event_perf.o 00:35:27.928 CC examples/accel/perf/accel_perf.o 00:35:27.928 CC test/dma/test_dma/test_dma.o 00:35:27.928 CC test/app/bdev_svc/bdev_svc.o 00:35:27.928 CC test/blobfs/mkfs/mkfs.o 00:35:27.928 CC test/bdev/bdevio/bdevio.o 00:35:27.928 CC test/accel/dif/dif.o 00:35:27.928 CC test/env/mem_callbacks/mem_callbacks.o 00:35:27.928 LINK event_perf 00:35:27.928 CXX 
test/cpp_headers/accel_module.o 00:35:28.186 LINK bdev_svc 00:35:28.186 LINK mkfs 00:35:28.186 LINK spdk_trace 00:35:28.186 CXX test/cpp_headers/assert.o 00:35:28.186 LINK accel_perf 00:35:28.186 LINK test_dma 00:35:28.186 LINK dif 00:35:28.186 LINK bdevio 00:35:28.444 CXX test/cpp_headers/barrier.o 00:35:28.444 LINK mem_callbacks 00:35:28.444 CXX test/cpp_headers/base64.o 00:35:28.702 CXX test/cpp_headers/bdev.o 00:35:29.267 CXX test/cpp_headers/bdev_module.o 00:35:29.525 CXX test/cpp_headers/bdev_zone.o 00:35:30.091 CXX test/cpp_headers/bit_array.o 00:35:30.657 CXX test/cpp_headers/bit_pool.o 00:35:30.915 CXX test/cpp_headers/blob.o 00:35:31.482 CXX test/cpp_headers/blob_bdev.o 00:35:32.048 CXX test/cpp_headers/blobfs.o 00:35:32.615 CXX test/cpp_headers/blobfs_bdev.o 00:35:33.181 CXX test/cpp_headers/conf.o 00:35:33.748 CXX test/cpp_headers/config.o 00:35:33.748 CXX test/cpp_headers/cpuset.o 00:35:34.314 CXX test/cpp_headers/crc16.o 00:35:34.573 CC app/trace_record/trace_record.o 00:35:35.139 CXX test/cpp_headers/crc32.o 00:35:35.397 LINK spdk_trace_record 00:35:35.655 CXX test/cpp_headers/crc64.o 00:35:36.630 CXX test/cpp_headers/dif.o 00:35:37.566 CXX test/cpp_headers/dma.o 00:35:38.945 CXX test/cpp_headers/endian.o 00:35:40.322 CXX test/cpp_headers/env.o 00:35:41.699 CXX test/cpp_headers/env_dpdk.o 00:35:43.077 CXX test/cpp_headers/event.o 00:35:44.013 CXX test/cpp_headers/fd.o 00:35:45.391 CXX test/cpp_headers/fd_group.o 00:35:46.326 CXX test/cpp_headers/file.o 00:35:47.703 CXX test/cpp_headers/ftl.o 00:35:49.081 CXX test/cpp_headers/gpt_spec.o 00:35:50.456 CXX test/cpp_headers/hexlify.o 00:35:51.390 CXX test/cpp_headers/histogram_data.o 00:35:52.766 CXX test/cpp_headers/idxd.o 00:35:52.766 CC test/env/vtophys/vtophys.o 00:35:53.705 LINK vtophys 00:35:53.705 CXX test/cpp_headers/idxd_spec.o 00:35:54.662 CXX test/cpp_headers/init.o 00:35:55.234 CC test/event/reactor/reactor.o 00:35:55.801 CXX test/cpp_headers/ioat.o 00:35:56.059 LINK reactor 00:35:56.627 CXX test/cpp_headers/ioat_spec.o 00:35:57.563 CXX test/cpp_headers/iscsi_spec.o 00:35:58.500 CXX test/cpp_headers/json.o 00:35:59.436 CXX test/cpp_headers/jsonrpc.o 00:36:00.814 CXX test/cpp_headers/likely.o 00:36:00.814 CC app/nvmf_tgt/nvmf_main.o 00:36:01.750 CXX test/cpp_headers/log.o 00:36:01.750 LINK nvmf_tgt 00:36:02.686 CXX test/cpp_headers/lvol.o 00:36:04.063 CXX test/cpp_headers/memory.o 00:36:05.437 CXX test/cpp_headers/mmio.o 00:36:06.372 CXX test/cpp_headers/nbd.o 00:36:06.649 CXX test/cpp_headers/notify.o 00:36:08.022 CXX test/cpp_headers/nvme.o 00:36:09.398 CXX test/cpp_headers/nvme_intel.o 00:36:10.332 CXX test/cpp_headers/nvme_ocssd.o 00:36:10.591 CC examples/bdev/hello_world/hello_bdev.o 00:36:11.539 CXX test/cpp_headers/nvme_ocssd_spec.o 00:36:11.809 LINK hello_bdev 00:36:12.744 CXX test/cpp_headers/nvme_spec.o 00:36:14.121 CXX test/cpp_headers/nvme_zns.o 00:36:15.056 CXX test/cpp_headers/nvmf.o 00:36:16.436 CXX test/cpp_headers/nvmf_cmd.o 00:36:17.374 CXX test/cpp_headers/nvmf_fc_spec.o 00:36:18.310 CXX test/cpp_headers/nvmf_spec.o 00:36:19.247 CXX test/cpp_headers/nvmf_transport.o 00:36:20.184 CXX test/cpp_headers/opal.o 00:36:21.563 CXX test/cpp_headers/opal_spec.o 00:36:22.501 CXX test/cpp_headers/pci_ids.o 00:36:23.438 CXX test/cpp_headers/pipe.o 00:36:24.375 CXX test/cpp_headers/queue.o 00:36:24.633 CXX test/cpp_headers/reduce.o 00:36:25.569 CXX test/cpp_headers/rpc.o 00:36:26.946 CXX test/cpp_headers/scheduler.o 00:36:27.883 CXX test/cpp_headers/scsi.o 00:36:28.142 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:36:29.146 CXX test/cpp_headers/scsi_spec.o 00:36:29.146 LINK env_dpdk_post_init 00:36:29.713 CXX test/cpp_headers/sock.o 00:36:31.091 CXX test/cpp_headers/stdinc.o 00:36:32.469 CXX test/cpp_headers/string.o 00:36:33.406 CXX test/cpp_headers/thread.o 00:36:33.665 CC test/event/reactor_perf/reactor_perf.o 00:36:34.601 CXX test/cpp_headers/trace.o 00:36:34.858 LINK reactor_perf 00:36:35.424 CXX test/cpp_headers/trace_parser.o 00:36:36.799 CXX test/cpp_headers/tree.o 00:36:36.799 CXX test/cpp_headers/ublk.o 00:36:38.175 CXX test/cpp_headers/util.o 00:36:39.552 CXX test/cpp_headers/uuid.o 00:36:40.930 CXX test/cpp_headers/version.o 00:36:41.188 CXX test/cpp_headers/vfio_user_pci.o 00:36:42.590 CXX test/cpp_headers/vfio_user_spec.o 00:36:43.966 CXX test/cpp_headers/vhost.o 00:36:45.350 CXX test/cpp_headers/vmd.o 00:36:46.737 CXX test/cpp_headers/xor.o 00:36:48.181 CXX test/cpp_headers/zipf.o 00:36:50.084 CC test/env/memory/memory_ut.o 00:36:56.648 LINK memory_ut 00:37:08.856 CC examples/bdev/bdevperf/bdevperf.o 00:37:12.140 LINK bdevperf 00:37:27.067 CC test/event/app_repeat/app_repeat.o 00:37:27.067 LINK app_repeat 00:37:28.484 CC test/env/pci/pci_ut.o 00:37:30.389 LINK pci_ut 00:37:38.507 CC test/event/scheduler/scheduler.o 00:37:39.074 LINK scheduler 00:37:40.449 CC test/lvol/esnap/esnap.o 00:37:40.718 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:37:41.288 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:37:41.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:37:41.856 LINK nvme_fuzz 00:37:42.114 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:37:43.492 LINK vhost_fuzz 00:37:44.869 LINK iscsi_fuzz 00:37:51.439 LINK esnap 00:37:51.439 CC test/nvme/aer/aer.o 00:37:52.818 LINK aer 00:38:31.599 CC test/nvme/reset/reset.o 00:38:31.599 LINK reset 00:38:32.168 CC test/rpc_client/rpc_client_test.o 00:38:33.106 LINK rpc_client_test 00:38:33.365 CC test/thread/poller_perf/poller_perf.o 00:38:34.303 LINK poller_perf 00:38:42.430 CC examples/blob/hello_world/hello_blob.o 00:38:42.690 LINK hello_blob 00:38:42.949 CC examples/blob/cli/blobcli.o 00:38:44.853 LINK blobcli 00:38:45.790 CC test/app/histogram_perf/histogram_perf.o 00:38:46.359 LINK histogram_perf 00:38:48.293 CC app/iscsi_tgt/iscsi_tgt.o 00:38:49.231 LINK iscsi_tgt 00:38:52.521 CC app/spdk_tgt/spdk_tgt.o 00:38:53.458 LINK spdk_tgt 00:38:54.837 CC app/spdk_lspci/spdk_lspci.o 00:38:55.406 LINK spdk_lspci 00:39:07.616 CC test/thread/lock/spdk_lock.o 00:39:10.907 CC test/app/jsoncat/jsoncat.o 00:39:10.907 LINK spdk_lock 00:39:11.167 LINK jsoncat 00:39:13.704 CC test/nvme/sgl/sgl.o 00:39:14.641 LINK sgl 00:39:21.234 CC test/nvme/e2edp/nvme_dp.o 00:39:23.140 LINK nvme_dp 00:39:49.695 CC app/spdk_nvme_perf/perf.o 00:39:49.695 LINK spdk_nvme_perf 00:39:50.263 CC test/app/stub/stub.o 00:39:51.640 LINK stub 00:39:54.173 CC app/spdk_nvme_identify/identify.o 00:39:55.551 CC app/spdk_nvme_discover/discovery_aer.o 00:39:56.930 LINK spdk_nvme_discover 00:39:56.930 LINK spdk_nvme_identify 00:40:15.052 CC app/spdk_top/spdk_top.o 00:40:19.244 LINK spdk_top 00:40:34.128 CC test/nvme/overhead/overhead.o 00:40:34.696 LINK overhead 00:40:52.789 CC app/vhost/vhost.o 00:40:53.357 LINK vhost 00:40:58.658 CC app/spdk_dd/spdk_dd.o 00:40:59.225 LINK spdk_dd 00:40:59.793 CC app/fio/nvme/fio_plugin.o 00:41:01.173 LINK spdk_nvme 00:41:01.432 CC test/nvme/err_injection/err_injection.o 00:41:01.432 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:41:01.432 CC test/nvme/startup/startup.o 00:41:02.001 LINK startup 
00:41:02.001 LINK err_injection
00:41:02.001 LINK histogram_ut
00:41:03.379 CC examples/ioat/perf/perf.o
00:41:03.948 LINK ioat_perf
00:41:05.329 CC test/unit/lib/accel/accel.c/accel_ut.o
00:41:11.923 LINK accel_ut
00:41:24.128 CC test/nvme/reserve/reserve.o
00:41:25.064 CC test/nvme/simple_copy/simple_copy.o
00:41:25.064 LINK reserve
00:41:26.442 LINK simple_copy
00:41:29.731 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:41:39.713 CC examples/ioat/verify/verify.o
00:41:41.085 LINK verify
00:41:46.389 LINK bdev_ut
00:41:54.506 CC test/unit/lib/bdev/part.c/part_ut.o
00:41:57.036 CC test/nvme/connect_stress/connect_stress.o
00:41:57.973 LINK connect_stress
00:41:59.873 CC test/nvme/boot_partition/boot_partition.o
00:42:00.130 LINK part_ut
00:42:01.065 LINK boot_partition
00:42:15.941 CC test/nvme/compliance/nvme_compliance.o
00:42:15.941 CC examples/nvme/hello_world/hello_world.o
00:42:15.941 LINK nvme_compliance
00:42:15.941 LINK hello_world
00:42:19.235 CC examples/nvme/reconnect/reconnect.o
00:42:19.851 LINK reconnect
00:42:21.754 CC examples/nvme/nvme_manage/nvme_manage.o
00:42:23.655 LINK nvme_manage
00:42:38.531 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:42:38.531 LINK scsi_nvme_ut
00:42:46.645 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:42:48.020 LINK gpt_ut
00:42:49.396 CC examples/nvme/arbitration/arbitration.o
00:42:50.771 LINK arbitration
00:42:57.364 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:42:57.364 CC examples/nvme/hotplug/hotplug.o
00:42:57.622 LINK hotplug
00:43:00.908 LINK vbdev_lvol_ut
00:43:10.882 CC test/nvme/fused_ordering/fused_ordering.o
00:43:11.449 LINK fused_ordering
00:43:11.449 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:43:12.015 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:43:12.015 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:43:12.581 LINK bdev_zone_ut
00:43:12.839 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:43:13.773 LINK bdev_raid_sb_ut
00:43:14.340 LINK bdev_raid_ut
00:43:14.340 CC test/nvme/doorbell_aers/doorbell_aers.o
00:43:14.904 LINK doorbell_aers
00:43:16.280 CC test/nvme/fdp/fdp.o
00:43:16.848 LINK bdev_ut
00:43:17.415 LINK fdp
00:43:17.982 CC test/nvme/cuse/cuse.o
00:43:19.358 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:43:21.254 LINK vbdev_zone_block_ut
00:43:21.511 LINK cuse
00:43:26.777 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:43:27.344 LINK concat_ut
00:43:27.601 CC examples/nvme/cmb_copy/cmb_copy.o
00:43:28.166 LINK cmb_copy
00:43:29.099 CC examples/nvme/abort/abort.o
00:43:29.358 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:43:29.924 LINK pmr_persistence
00:43:30.183 LINK abort
00:43:30.183 CC examples/sock/hello_world/hello_sock.o
00:43:31.117 LINK hello_sock
00:43:33.651 CC examples/vmd/lsvmd/lsvmd.o
00:43:33.910 LINK lsvmd
00:43:33.910 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:43:35.286 LINK raid1_ut
00:43:50.193 CC examples/vmd/led/led.o
00:43:50.193 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:43:50.193 LINK led
00:43:52.726 LINK raid5f_ut
00:43:56.917 CC examples/nvmf/nvmf/nvmf.o
00:43:58.293 LINK nvmf
00:44:01.580 CC app/fio/bdev/fio_plugin.o
00:44:01.580 CC examples/util/zipf/zipf.o
00:44:02.516 LINK zipf
00:44:02.774 LINK spdk_bdev
00:44:05.306 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:44:09.493 CC examples/thread/thread/thread_ex.o
00:44:09.752 LINK thread
00:44:11.129 CC examples/interrupt_tgt/interrupt_tgt.o
00:44:11.129 CC examples/idxd/perf/perf.o
00:44:11.129 LINK bdev_nvme_ut
00:44:11.129 LINK interrupt_tgt
00:44:11.387 LINK idxd_perf
00:44:11.955 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:44:13.366 LINK blob_bdev_ut
00:44:13.626 CC test/unit/lib/blob/blob.c/blob_ut.o
00:44:16.155 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:44:16.413 LINK tree_ut
00:44:20.601 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:44:20.601 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:44:21.167 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:44:22.103 LINK blobfs_bdev_ut
00:44:22.362 LINK blobfs_async_ut
00:44:22.929 LINK blobfs_sync_ut
00:44:24.305 CC test/unit/lib/dma/dma.c/dma_ut.o
00:44:25.240 CC test/unit/lib/event/app.c/app_ut.o
00:44:25.806 LINK dma_ut
00:44:26.741 LINK blob_ut
00:44:27.677 LINK app_ut
00:44:32.947 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:44:34.863 LINK ioat_ut
00:44:37.426 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:44:39.336 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:44:39.336 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:44:39.595 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:44:39.855 LINK reactor_ut
00:44:40.424 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:44:40.424 LINK init_grp_ut
00:44:41.806 LINK conn_ut
00:44:45.095 LINK iscsi_ut
00:44:45.095 CC test/unit/lib/iscsi/param.c/param_ut.o
00:44:45.663 LINK json_parse_ut
00:44:47.042 LINK param_ut
00:44:48.949 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:44:50.323 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:44:51.256 LINK portal_grp_ut
00:44:51.256 LINK jsonrpc_server_ut
00:44:54.542 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:44:56.529 CC test/unit/lib/log/log.c/log_ut.o
00:44:56.529 LINK tgt_node_ut
00:44:57.467 LINK log_ut
00:44:58.035 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:45:02.228 CC test/unit/lib/notify/notify.c/notify_ut.o
00:45:02.228 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:45:02.228 LINK lvol_ut
00:45:02.487 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:45:03.056 LINK notify_ut
00:45:03.995 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:45:06.532 LINK nvme_ut
00:45:06.791 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:45:08.698 LINK ctrlr_ut
00:45:09.635 LINK tcp_ut
00:45:12.924 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:45:14.299 LINK nvme_ctrlr_ut
00:45:14.864 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:45:17.396 LINK nvme_ctrlr_cmd_ut
00:45:18.775 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:45:19.034 LINK subsystem_ut
00:45:22.325 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:45:22.325 LINK ctrlr_discovery_ut
00:45:23.263 LINK ctrlr_bdev_ut
00:45:23.832 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:45:23.832 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:45:25.738 LINK nvmf_ut
00:45:25.738 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:45:27.644 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:45:28.214 LINK nvme_ctrlr_ocssd_cmd_ut
00:45:28.214 LINK rdma_ut
00:45:31.504 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:45:31.764 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:45:31.764 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:45:31.764 LINK transport_ut
00:45:32.023 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:45:34.559 LINK nvme_ns_ut
00:45:36.465 LINK nvme_ns_ocssd_cmd_ut
00:45:36.465 LINK nvme_ns_cmd_ut
00:45:36.725 LINK nvme_pcie_ut
00:45:40.049 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:45:40.615 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:45:41.182 LINK nvme_poll_group_ut
00:45:41.441 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:45:41.700 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:45:41.700 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:45:41.700 LINK nvme_qpair_ut
00:45:42.269 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:45:42.528 LINK nvme_quirks_ut
00:45:43.097 LINK nvme_transport_ut
00:45:43.357 LINK nvme_io_msg_ut
00:45:43.616 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:45:43.616 LINK nvme_tcp_ut
00:45:43.616 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:45:43.875 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:45:44.135 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:45:44.703 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:45:44.703 LINK dev_ut
00:45:44.703 LINK nvme_opal_ut
00:45:44.962 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:45:45.221 LINK nvme_fabric_ut
00:45:45.221 LINK nvme_pcie_common_ut
00:45:45.221 LINK scsi_ut
00:45:45.221 LINK lun_ut
00:45:45.481 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:45:46.860 LINK scsi_bdev_ut
00:45:46.861 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:45:46.861 CC test/unit/lib/sock/sock.c/sock_ut.o
00:45:47.430 LINK scsi_pr_ut
00:45:47.999 CC test/unit/lib/sock/posix.c/posix_ut.o
00:45:48.259 CC test/unit/lib/thread/thread.c/thread_ut.o
00:45:48.259 LINK sock_ut
00:45:48.259 CC test/unit/lib/util/base64.c/base64_ut.o
00:45:48.519 LINK base64_ut
00:45:48.519 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:45:48.519 LINK posix_ut
00:45:49.088 LINK thread_ut
00:45:49.088 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:45:49.657 LINK bit_array_ut
00:45:49.916 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:45:49.916 LINK nvme_rdma_ut
00:45:50.176 LINK cpuset_ut
00:45:50.744 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:45:51.003 LINK crc16_ut
00:45:51.003 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:45:51.571 LINK crc32_ieee_ut
00:45:51.571 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:45:51.571 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:45:51.831 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:45:51.831 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:45:51.831 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:45:51.831 LINK pci_event_ut
00:45:52.090 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:45:52.090 LINK crc32c_ut
00:45:52.090 LINK iobuf_ut
00:45:52.090 LINK subsystem_ut
00:45:52.090 LINK crc64_ut
00:45:52.090 CC test/unit/lib/util/dif.c/dif_ut.o
00:45:52.349 LINK nvme_cuse_ut
00:45:52.608 CC test/unit/lib/util/iov.c/iov_ut.o
00:45:52.867 CC test/unit/lib/util/string.c/string_ut.o
00:45:52.867 LINK iov_ut
00:45:52.867 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:45:53.126 CC test/unit/lib/util/math.c/math_ut.o
00:45:53.126 LINK dif_ut
00:45:53.126 LINK string_ut
00:45:53.126 LINK math_ut
00:45:53.386 LINK pipe_ut
00:45:54.350 CC test/unit/lib/util/xor.c/xor_ut.o
00:45:54.614 LINK xor_ut
00:45:54.614 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:45:54.614 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:45:54.872 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:45:54.872 LINK rpc_ut
00:45:54.872 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:45:54.872 CC test/unit/lib/rdma/common.c/common_ut.o
00:45:54.872 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:45:54.872 LINK idxd_user_ut
00:45:55.131 LINK ftl_l2p_ut
00:45:55.131 LINK common_ut
00:45:55.131 LINK idxd_ut
00:45:55.390 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:45:55.390 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:45:55.649 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:45:55.649 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:45:55.649 LINK vhost_ut
00:45:55.649 LINK ftl_io_ut
00:45:55.649 LINK ftl_bitmap_ut
00:45:55.908 LINK ftl_band_ut
00:45:55.908 LINK ftl_mempool_ut
00:45:56.476 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:45:56.476 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:45:56.476 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:45:56.735 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:45:56.735 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:45:56.735 LINK ftl_mngt_ut
00:45:56.995 LINK json_util_ut
00:45:57.255 LINK ftl_sb_ut
00:45:57.255 LINK json_write_ut
00:45:57.255 LINK ftl_layout_upgrade_ut
00:46:43.926 json_parse_ut.c: In function ‘test_parse_nesting’:
00:46:43.926 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:46:43.926 616 | test_parse_nesting(void)
00:46:43.926 | ^
00:46:43.926 06:05:47 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:46:44.184 make[1]: Nothing to be done for 'clean'.
00:46:47.473 06:05:51 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:46:47.473 06:05:51 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:46:47.473 06:05:51 -- common/autotest_common.sh@10 -- $ set +x
00:46:47.473 06:05:51 -- spdk/autopackage.sh@48 -- $ timing_finish
00:46:47.473 06:05:51 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:47.473 06:05:51 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:46:47.473 06:05:51 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:46:47.473 + [[ -n 2093 ]]
00:46:47.473 + sudo kill 2093
00:46:47.482 [Pipeline] }
00:46:47.499 [Pipeline] // timeout
00:46:47.505 [Pipeline] }
00:46:47.522 [Pipeline] // stage
00:46:47.530 [Pipeline] }
00:46:47.547 [Pipeline] // catchError
00:46:47.558 [Pipeline] stage
00:46:47.560 [Pipeline] { (Stop VM)
00:46:47.576 [Pipeline] sh
00:46:47.857 + vagrant halt
00:46:51.143 ==> default: Halting domain...
00:47:01.169 [Pipeline] sh
00:47:01.451 + vagrant destroy -f
00:47:04.094 ==> default: Removing domain...
00:47:04.671 [Pipeline] sh
00:47:04.950 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:47:04.961 [Pipeline] }
00:47:04.976 [Pipeline] // stage
00:47:04.982 [Pipeline] }
00:47:04.996 [Pipeline] // dir
00:47:05.002 [Pipeline] }
00:47:05.016 [Pipeline] // wrap
00:47:05.023 [Pipeline] }
00:47:05.035 [Pipeline] // catchError
00:47:05.044 [Pipeline] stage
00:47:05.046 [Pipeline] { (Epilogue)
00:47:05.058 [Pipeline] sh
00:47:05.340 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:47:20.236 [Pipeline] catchError
00:47:20.238 [Pipeline] {
00:47:20.251 [Pipeline] sh
00:47:20.534 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:47:20.794 Artifacts sizes are good
00:47:20.804 [Pipeline] }
00:47:20.818 [Pipeline] // catchError
00:47:20.829 [Pipeline] archiveArtifacts
00:47:20.836 Archiving artifacts
00:47:21.180 [Pipeline] cleanWs
00:47:21.192 [WS-CLEANUP] Deleting project workspace...
00:47:21.192 [WS-CLEANUP] Deferred wipeout is used...
00:47:21.198 [WS-CLEANUP] done
00:47:21.200 [Pipeline] }
00:47:21.216 [Pipeline] // stage
00:47:21.222 [Pipeline] }
00:47:21.236 [Pipeline] // node
00:47:21.242 [Pipeline] End of Pipeline
00:47:21.282 Finished: SUCCESS